Why is the HTTP protocol designed as plain text? - http

Yesterday I had a discussion with my colleagues about HTTP, and the question came up: why is HTTP designed as a plain-text protocol? Surely it could have been designed in a binary fashion, just like TCP, using flags to represent the different kinds of methods (POST, GET) and variables (HTTP headers). So why is HTTP designed this way? Are there technical or historical reasons?

A reason that's both technical and historical is that text protocols are almost always preferred in the Unix world.
Well, this is not really a reason but a pattern. The rationale behind it is that text protocols let you see what's going on on the network just by dumping everything that goes through; you don't need a specialized analyzer the way you do for TCP/IP. That makes such protocols easier to debug and easier to maintain.
Not only HTTP, but many protocols are text based (e.g., FTP, POP3, SMTP, IMAP).
You might want to take a look at The Art of Unix Programming for a much more detailed explanation of this Unix thing.

With HTTP, the content of a request is almost always orders of magnitude larger than the protocol overhead. Converting the protocol into a binary one would save very little bandwidth, and the easy debuggability that a text protocol offers easily trumps the minor bandwidth savings of a binary one.

Many Internet application protocols use more or less plain text for the protocol (see FTP, POP, SMTP, etc.).
It makes interoperability and troubleshooting much easier.

HTTP stands for "Hypertext Transfer Protocol".
It was initially devised as a way to serve text documents, hence the text-based protocol.
What we do with HTTP now is far beyond its original intent.

As with RFC 2616 section 3.7.1 for HTTP/1.1, the key delimiter for a command or header line is the text line break CRLF; text-based application protocols make it easy to carry out a conversation (for troubleshooting) with nothing more than a Telnet client. They also make it easier to program with ReadLine() calls and simple text-string matching.
The CRLF line break also allows near-unlimited arbitrary header extensions, unlike the fixed-size TCP or IP headers, where fields are hard-coded by bit offsets.
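To see how literal the "conversation over Telnet" idea is, here is a minimal sketch of the same exchange done from Python over a raw TCP socket; example.com is just a placeholder host:

    import socket

    # A raw HTTP/1.1 exchange over a plain TCP socket, showing the
    # CRLF-delimited, line-oriented text protocol described above.
    HOST = "example.com"  # placeholder host; any HTTP server works

    with socket.create_connection((HOST, 80)) as sock:
        # Each header line ends with CRLF; an empty CRLF line ends the headers.
        request = (
            "GET / HTTP/1.1\r\n"
            "Host: " + HOST + "\r\n"
            "Connection: close\r\n"
            "\r\n"
        )
        sock.sendall(request.encode("ascii"))

        # Because the protocol is text, the reply is readable as-is.
        response = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            response += chunk

    status_line = response.split(b"\r\n", 1)[0]
    print(status_line.decode("ascii"))  # e.g. "HTTP/1.1 200 OK"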

So it's easier to "read" the traffic or create a client or server?
You can debate whether it actually makes it easier, but surely that was the intent.

In the case of HTTP, some people have worked on a "binary" version of it, which they call Embedded Binary HTTP (EBHTTP):
https://datatracker.ietf.org/doc/html/draft-tolle-core-ebhttp-00

Historically, it all starts with RFC 822 (Standard for the Format of ARPA Internet Text Messages), whose latest revision is RFC 5322 (Internet Message Format). SMTP (RFC 821) was one of the most popular protocols based on RFC 822, and HTTP in turn was born out of SMTP (your mail protocol).

I like the "...preferred in the Unix world" reason, but it doesn't go into any explanation for why.
To understand why, you need to place yourself in the shoes of a designer who wants to make a usable product.
A) You can document the shit out of meaningless gibberish (binary).
B) Develop or hope others develop tools that portray your meaningless gibberish in a meaningful way.
or
A) You can document the shit out of meaningful text that takes advantage of language as a tool for a self-documenting protocol.
B) There is no immediate need for additional tools, and additional tools will be much easier to write and debug.
This enables staged delivery and produces something that is easier to comprehend and recall when doing future development. It also creates a situation where a higher-level abstraction is no longer strictly necessary.
Imagine a world where setting a header value isn't as simple as dictionary/Map somewhere in your framework. When running into errors you'd have to constantly question whether or not your framework is correct or not, because you couldn't easily see it's doing the right thing without additional tools. That would be the world of HTTP if each framework needed to invent/implement it's own higher level abstraction (browsers come to mind).
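As a small illustration of those dictionary-style ergonomics, using Python's standard http.client (the host name and custom header are arbitrary examples):

    import http.client

    # Setting a header really is just a dictionary entry; the host name and
    # custom header below are arbitrary examples.
    conn = http.client.HTTPConnection("example.org")
    conn.request("GET", "/", headers={"X-Trace-Id": "abc123"})
    resp = conn.getresponse()
    print(resp.status, resp.getheader("Content-Type"))
    conn.close()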
Many protocol designers want efficiency; this design focuses on usability, which is paramount in the software development industry. Unusable tools that are prematurely optimized create an unnecessary burden for software developers, and that burden manifests across the board.

Now there is HTTP/2, which is binary; it is much less error-prone:
https://http2.github.io/faq/#why-is-http2-binary
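For a feel of what the binary framing looks like, the sketch below decodes the fixed 9-octet HTTP/2 frame header described in RFC 7540 section 4.1; the sample bytes are a hand-built SETTINGS frame header:

    import struct

    # Decoding the fixed 9-octet HTTP/2 frame header (RFC 7540, section 4.1).
    # The layout is fixed bits, so there is no ambiguity about line endings
    # or whitespace -- part of why the binary framing is less error-prone.
    def parse_frame_header(data):
        # 24-bit length, 8-bit type, 8-bit flags, 1 reserved bit + 31-bit stream id
        length_hi, length_lo, frame_type, flags, stream_id = struct.unpack(
            ">BHBBI", data[:9]
        )
        length = (length_hi << 16) | length_lo
        stream_id &= 0x7FFFFFFF  # clear the reserved bit
        return length, frame_type, flags, stream_id

    # A hand-built SETTINGS frame header (type 0x4, empty payload, stream 0).
    header = bytes([0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00, 0x00])
    print(parse_frame_header(header))  # -> (0, 4, 0, 0)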

Related

How to author an Internet protocol?

We're all familiar with popular protocols like IMAP and POP, used for email messaging.
I have a plan for a new protocol, but I'm not sure how to go about implementing it.
Is the protocol a collection of C source code, for example, that accepts and sends data through ports? Or is a protocol just a thorough description of how data should be sent, which clients then implement?
I'm lost as to where to start here, and I'm not very familiar with how the protocol system works.
Edit:
Also, if I write a protocol and it isn't made official by the standards group, can people/clients still implement it?
The official way is to write an RFC - a Request for Comments. People will respond to that (that's why it's an RFC) and probably try to implement your protocol.
As soon as two independent implementations exist that completely support the protocol, it's a new standard.
Of course, people aren't going to implement a new protocol for someone just for fun. So you should first find a group who is interested in listening to you. Maybe there already is a protocol which does what you want (or can easily be extended).
But you probably don't want to invent a new standard. Standards are a lot of work and - for some - overrated.
So you should describe how it works and create a library that can read and write the protocol, so developers can use it even though it's not an official standard.
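To make "a library that can read and write the protocol" concrete, here is a toy codec sketch for a hypothetical line-based protocol of the form "VERB argument\r\n"; the verb set is invented purely for illustration:

    # A toy codec for a hypothetical line-based protocol "VERB argument\r\n".
    # The verb set is invented purely for illustration.
    VERBS = {"PING", "ECHO", "QUIT"}

    def encode(verb, arg=""):
        if verb not in VERBS:
            raise ValueError("unknown verb: " + verb)
        line = (verb + " " + arg).rstrip()
        return line.encode("ascii") + b"\r\n"

    def decode(line):
        verb, _, arg = line.rstrip(b"\r\n").decode("ascii").partition(" ")
        if verb not in VERBS:
            raise ValueError("unknown verb: " + verb)
        return verb, arg

    assert decode(encode("ECHO", "hello")) == ("ECHO", "hello")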
As you are interested in the Replace Email section of the Paul Graham article you linked, IMHO you will need to both develop a protocol definition and provide an example implementation. The protocol definition does not need to be published as an internet protocol standard in order to be useful.
You will need an implementation so that you can test, refine and improve the ideas. It is extremely unlikely the protocol will be right at the first attempt, and you'll need something to support the initial users.
You don't need a protocol definition to implement an improved email, but you will need one if you expect others to work with you and adopt it, though it very much depends on your 'business model'. I strongly recommend you have a protocol definition from the start, even if only to keep yourself sane when you try to produce the second implementation.
I recommend having a look at some examples of sneaky approaches to protocols and implementation. My favourite is described in the Viewpoints Research 2008 Progress report on a super-compact approach to TCP/IP.
They did not follow the traditional approach to developing the implementation of a protocol (the protocol stack). Instead, they wrote code which parsed the human-readable TCP/IP protocol specification and generated the code of a TCP/IP stack from that protocol document. The usual TCP/IP stack is about 40,000 lines of code or more; their program, which read the protocol specification and generated the code for a TCP/IP stack 'automatically', was only 160 lines of code. They used extremely powerful programming tools.
If you had an approach like that, you could keep the protocol implementation synchronised with the specification, and potentially make it straightforward for others to adopt your protocol.
HTH
You are confusing a protocol standard with the implementation.
The two are unrelated.
A protocol is described at a high level, but with enough information for someone to understand how it should be implemented.
The idea is that anyone reading the document can understand how and what to implement, in any language of their preference.
To give an example: the SIP protocol RFC describes the various flows, and also the various messages and how they are supposed to be processed, i.e., the semantics are well defined.
You can implement a SIP UA or server in C++ or Java; that choice is irrelevant to the SIP protocol.
For this you don't need to provide any source code (you could though if you think it helps clarify some obscurity of the description).
The most important part is that your protocol is actually reviewed by stakeholders i.e. people that expect it to solve their problems.
This part is the most important not only because review could uncover problems in your protocol, but because reviewers can verify that the concept is solid, i.e., that it can actually be implemented.
The only case where you might specify or imply something concrete is if the protocol demands specific constraints, e.g., a hard real-time constraint, which could serve as a "hint" about which implementations/languages to avoid.
Also, if I write a protocol and it isn't made official by the
standards group, can people/clients still implement it?
Strange question. What do you mean? How will someone know your protocol exists?
If it is official, they can get it from the standards group and implement it.
Otherwise you obviously have some sort of "proprietary" protocol (which is not uncommon, e.g., a company can have an internal protocol for its own software), and people have to get the spec from you.

Question about using XML as an application-level protocol

I am wondering if I could use XML as the application-level protocol. The XML would just sit on top of TCP, and the two applications would just need to know how to parse XML.
But the problem is that XML is not compact enough. Should I use some algorithm to compact the XML payload before sending it over TCP? If I do compact it, then compression/decompression costs are incurred on both ends. What a dilemma. Any good suggestions? Or am I taking the wrong approach?
Many thanks.
Define "compact enough", have you measured it to be too slow for your application? Avoid premature optimisation.
As with any protocol, there are trade-offs in various directions. XML buys you a well-known cross-platform format with libraries for just about any language, that's capable of representing all kinds of structured data. XMPP opts for this, and uses optional compression for bandwidth-constrained setups. Experiments in the XMPP world with alternate representations have rarely proven worth the effort.
A notch down from XML, but still providing many of the advantages, is JSON. While it lacks namespacing, it's fairly simple, and libraries are nearly as common as XML ones. Still, JSON is text-based and may be too verbose for some situations.
The last sensible choice would be a binary protocol. This has advantages in that you can tailor and optimise it specifically to your application. The disadvantages are that you have to write the parsing and serialization yourself, although there are tools to automate this such as Google's Protocol Buffers project.
Ultimately all of these are suitable in different places, and the choice is up to the application developer for which one they should use for a given project.
I'd say that you can use straight XML for your messages and not bother with compression or anything (XML isn't exactly blazingly fast) if you need a simple way of specifying how your messages look. If you need something faster and more compact, take a look at Google Protocol Buffers or Thrift (I personally prefer Protocol Buffers, but not everyone is like me...).
Using XML can be a good choice if, for instance, you need your interface to interoperate easily with lots of different clients (such as Web Services). If, on the other hand, you are always in charge of both ends of the communication, you can do something quick to get going and then optimize later.

Good tools to understand / reverse engineer a top layer network protocol

There is an interesting problem at hand. I have a role-playing MMOG running through a client application (not a browser) which sends the actions of my player to a server which keeps all the players in sync by sending packets back.
Now, the game uses a top-layer protocol over TCP/IP to send the data. However, Wireshark does not know what protocol is being used and shows everything beyond the TCP header as a dump.
Further, this dump does not have any plain text strings. Although the game has a chat feature, the chat string being sent is not seen in this dump as plain text anywhere.
My task is to reverse engineer the protocol a little to find some very basic stuff about the data contained in the packets.
Does anybody know why the chat string is not visible as plain text, and whether it is likely that a standard top-level protocol is being used?
Also, are there any tools which can help to get the data from the dump?
If it's encrypted you do have a chance (in fact, you have a 100% chance if you handle it right): the key must reside somewhere on your computer. Just pop open your favorite debugger, watch for a bit (err, a hundred bytes or so I'd hope) of data to come in from a socket, set a watchpoint on that data, and look at the stack traces of things that access it. If you're really lucky, you might even see it get decrypted in place.
If not, you'll probably pick up on the fact that they're using a standard encryption algorithm (they'd be fools not to from a theoretical security standpoint) either by looking at stack traces (if you're lucky) or by using one of the IV / S-box profilers out there (avoid the academic ones, most of them don't work without a lot of trouble). Many encryption algorithms use blocks of "standard data" that can be detected (these are the IVs / S-boxes); these are what you look for in the absence of other information.
Whatever you find, google it, and try to override their encryption library to dump the data that's being encrypted/decrypted. From these dumps, it should be relatively easy to see what's going on.
REing an encrypted session can be a lot of fun, but it requires skill with your debugger and lots of reading. It can be frustrating but you won't be sorry if you spend the time to learn how to do it :)
Best guess: encryption, or compression.
Even telnet supports compression over the wire, even though the whole protocol is entirely text based (well, very nearly).
You could try running the data stream through some common compression utilities, but I doubt that'd do much for you, since in all likelihood they don't transmit compression headers; there are simply some predefined values enforced.
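One way to test the no-headers hypothesis is to probe the captured bytes for headerless (raw) DEFLATE data at a few offsets; a rough Python sketch, where `captured` stands for bytes exported from the Wireshark dump:

    import zlib

    # Probe a captured payload for headerless (raw) DEFLATE data at a few
    # offsets; wbits=-15 tells zlib not to expect a zlib/gzip header.
    # 'captured' stands for bytes exported from the Wireshark dump.
    def try_raw_inflate(captured):
        for offset in range(min(len(captured), 64)):
            d = zlib.decompressobj(-15)
            try:
                out = d.decompress(captured[offset:])
                if out:
                    return offset, out
            except zlib.error:
                continue
        return None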
If it's in fact encryption, then you're pretty much screwed (without much, much more effort, which I'm not even going to start to get into).
It's most likely either compressed or encrypted.
If it's encrypted you won't have a chance.
If it's compressed, you'll have to somehow figure out which parts of the data are compressed, where the compressed parts start, and what the compression algorithm is. If you're lucky there will be standard headers that you can identify, although they have probably been stripped out to save space.
None of this is simple. Reverse engineering is hard. There aren't any standard tools to help you, you'll just have to investigate and try things until you figure it out. My advice would be to ask the developers for a protocol spec and see if they are willing to help support what you are trying to do.

TCP Vs. Http Benchmark

I have a Web application sitting on IIS and talking to a [remote] service machine.
I am not sure whether to choose TCP or HTTP as the main protocol.
More details:
I will have more than one service/endpoint
some of them will be one-way
the others will be two-way
the web pages will work in front of the services
we are talking about a high-scale web site
I know the difference pretty well, but I am looking for a good benchmark that shows how much faster TCP is.
HTTP is a layer built on top of the TCP layer to somewhat standardize data transmission, so naturally using TCP sockets directly will be less heavyweight than using HTTP. If performance is the only thing you care about, then plain TCP is the best solution for you.
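To make the overhead difference concrete, here is a back-of-envelope sketch; the payload and framing scheme are illustrative only:

    # Back-of-envelope per-message overhead; payload and framing are
    # illustrative only.
    payload = b'{"user": 42, "action": "ping"}'

    # Raw TCP still needs framing of your own, e.g. a 4-byte length prefix.
    tcp_message = len(payload).to_bytes(4, "big") + payload

    # HTTP/1.1 carries the same payload plus a minimal set of request headers.
    http_message = (
        b"POST /ping HTTP/1.1\r\n"
        b"Host: service.internal\r\n"
        b"Content-Type: application/json\r\n"
        b"Content-Length: " + str(len(payload)).encode("ascii") + b"\r\n"
        b"\r\n" + payload
    )

    # Header bytes dominate only when payloads are tiny.
    print(len(tcp_message), len(http_message))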
You may want to consider HTTP because of its ease of use and simplicity, which ultimately reduces development time. If you are doing something that might be directly consumed by a browser (through an AJAX call), then you should use HTTP. For a non-modern browser to directly consume TCP connections without HTTP, you would have to use Flash or Silverlight, and this normally happens for rich content such as video and/or audio. However, many modern browsers now (as of 2013) support APIs to access network, audio, and video resources directly via JavaScript. The only thing to consider is the usage rate of modern web browsers among your users; see caniuse.com for the latest info regarding browser compatibility.
As for benchmarks, this is the only thing I found. See page 5, it has the performance graph. Note that it doesn't really compare apples to apples since it compares the TCP/Binary data option with the HTTP/XML data option. Which begs the question: what kind of data are your services outputting? binary (video, audio, files) or text (JSON, XML, HTML)?
In general, performance-oriented systems like those in the military or financial sectors will probably use plain TCP connections, whereas general web-focused companies will opt to use HTTP and host their services with IIS or Apache.
The question you really need an answer for is "will TCP or HTTP be faster for my application". The answer is that it depends on the nature of your application, and on the way that you use TCP and/or HTTP in your application. A generic HTTP vs TCP benchmark won't answer your question, because the chances are that the benchmark won't match your application behaviour.
In theory, an optimally designed / implemented solution using TCP will be faster than one that uses HTTP. But it may also be considerably more work to implement ... depending on the details of your application.
There are other issues that might affect your choice. For example, you are less likely to run into firewall issues if you use HTTP than if you use TCP on some random port. Another is that HTTP would make it easier to implement a load balancer between the IIS server and the backend systems.
Finally, at the end of the day it is probably more important that your system is secure, reliable, maintainable and (maybe) scalable than it is fast. A sensible strategy is to implement the simple version first, but have plans in your head for how to make it faster ... if the simple solution is too slow.
You could always benchmark it.
In general, if what you want to accomplish can be easily done over HTTP (i.e. the only reason you would otherwise think about using raw TCP is for a possible performance boost) you should probably just use HTTP. Sure, you can do socket programming, but why bother? Lots of people have spent a lot of time and effort building HTTP client libraries and servers, and they have spent waaaaaay more time optimizing and testing that code than you will ever be able to possibly spend on your TCP sockets. There are simply so many possible errors that you would have to handle, edge cases, and optimizations that can be done, that it is usually easier and safer to use a library function for HTTP.
Plus, the HTTP specs define all kinds of features (which clients/servers implement and you get to use "for free", i.e., with no extra implementation work), and that makes any third-party interoperability that much easier. "Here is my URL, here are the rules for what you send, here are the rules for what I return..."
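As a small illustration of how much you get from a stock HTTP client library, using Python's standard urllib (the URL is just a placeholder):

    from urllib.request import urlopen

    # One call gets you redirects, chunked decoding, connection handling, etc.
    # The URL is just a placeholder.
    with urlopen("http://example.com/") as resp:
        print(resp.status, len(resp.read()))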
I have a self-hosted Windows native C++ server application that uses the Casablanca C++ REST SDK. Any client that can send a POST, GET, PUT or DEL message can send requests to this self-hosted Windows app: C#, JavaScript, C++, cURL, basically anything. I can also use a plain browser address bar for GET-related requests with various parameters. Currently I only run this system on a private intranet, so it is very fast; I haven't benchmarked it against raw TCP, but on a private intranet I doubt there would be even a few microseconds' difference. For the convenience, ease of development, and ability to expand to a full-blown internet app, it's a dream come true. It is a dedicated system with a private protocol using small JSON packets, so I'm not certain whether that fits your application's needs. Another nice thing is that this Windows application's native C++ code could be ported fairly easily to run on Linux/macOS, as the Casablanca REST SDK is portable to those OSes.

Designing an application protocol [closed]

I have an existing standalone application which is going to be extended by a 3rd-party, using a network protocol. The capabilities are already implemented, all I need is to expose them to the outside.
Assuming the transport protocol is already chosen (UDP), are there any resources that will help me to design my application protocol?
There seems to be a lot of information about software design, but not on protocol design.
I've already looked at Application Protocol Design.
See the Jabber protocol design guidelines and RFC 4101. Although the latter is aimed at making RFCs easier for reviewers to understand, it provides some interesting advice.
Have you looked at Google Protocol Buffer? It seems like a good way to resolve this issue.
You can create an endpoint that communicates with your existing app and then responds from 'outside' using the protobuffer protocol. It's binary, so it's tiny and fast and you don't have to write your own protocol manager, 'cause you can use the Google ones. The downside is that it has to be implemented on both sides of the system (on your 'server' side and on the consumer/client side).
Another recommendation for protocol buffers - nice tight binary with little effort. Note, however, that while the binary protocol is well defined, there isn't yet an agreed RPC standard (several are in progress, tending to lean towards TCP or HTTP).
The spec makes it very easy to have the client and server in different architectures, which is good - plus it is extensible.
Caveat: I'm the author of one of the .NET versions, so I may well be biased ;-p
First off, UDP is primarily a one-way broadcast transport method. It is also potentially lossy, so you need to be able to handle missing packets and out-of-order packets. If you need any level of reliability from UDP, or require two-way connections, you will end up needing just about everything from TCP, so you might as well go with that to start with and let the network stack take care of it.
Next up, if your data is potentially larger than a single IP packet, then you will need some way of identifying the start and end of each packet, and a means of handling illegal or corrupt packets. I would recommend some kind of header with a packet length, some kind of footer, and maybe a checksum.
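A minimal sketch of that kind of framing (the magic number, layout, and checksum choice here are made-up examples, not a standard):

    import struct
    import zlib

    # Minimal framing in the spirit described above: a magic number and length
    # in the header, the payload, and a CRC32 trailer.
    MAGIC = 0xBEEF

    def encode_frame(payload):
        header = struct.pack(">HI", MAGIC, len(payload))
        trailer = struct.pack(">I", zlib.crc32(payload))
        return header + payload + trailer

    def decode_frame(data):
        magic, length = struct.unpack(">HI", data[:6])
        if magic != MAGIC:
            raise ValueError("bad magic: not a frame boundary")
        payload = data[6:6 + length]
        (expected,) = struct.unpack(">I", data[6 + length:10 + length])
        if zlib.crc32(payload) != expected:
            raise ValueError("corrupt frame: checksum mismatch")
        return payload

    assert decode_frame(encode_frame(b"hello")) == b"hello"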
Then you need some way of encoding the messages and responses. There are many RPC protocols around. You could look at SOAP, or design a custom XML-based protocol, or a binary one.
You should really think hard about whether you want to design, document and maintain your own protocol, or use something that already exists. There is probably already a documented protocol that matches your needs. Depending on what you are doing, it may look like overkill at first, and implementing the whole spec will look tedious and a lot less fun than writing your own; but if you intend for your application to still be actively developed in a few years, it should save you a lot of time and money to use something that already exists and is known by third parties. Besides, if you can use an existing library for that protocol, the implementation part should be a lot faster.
Designing a new protocol is more fun than implementing one, but less fun than maintaining one, as you have to live with all the defects. No protocol is perfect, but if you have never designed one, you can be assured you will make more mistakes designing it than the people who designed the existing, well-known protocols you could use instead.
In short, leverage what already exists whenever possible.
If you're choosing XML, keep in mind that you will have a giant overhead of markup.
A simple binary protocol will also need far fewer resources to parse than XML.
If you do not want to build your protocol from the ground up, you should take a look at SOAP. Support varies across programming languages, but cross-language communication is explicitly encouraged.
Unfortunately, SOAP over UDP seems to have been stuck in its infancy; HTTP is most commonly used as the transport.
I have an existing standalone application which is going to be extended by a 3rd-party, using a network protocol.
It would help to know a little more about what your program does and what the nature of these 3rd party extensions are. Maybe some rationale for using UDP?
