What does "operations are done with gRPC, over HTTP/2" mean? I am interested in knowing how gRPC and HTTP/2 play along.
gRPC is a protocol that uses HTTP/2 as its transport. The messages you send are encoded as gRPC frames (a 5-byte header followed by the serialized message) and packaged into HTTP/2 DATA frames. HTTP/2 HEADERS frames are used to propagate headers and trailers at the beginning and end of the call.
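For illustration, here is a minimal sketch of that length-prefixed framing (the class and method names are made up for the example): each message gets a 1-byte compressed flag plus a 4-byte big-endian length, and the resulting bytes are what end up inside HTTP/2 DATA frames.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

// Minimal sketch of the gRPC length-prefixed message framing carried inside
// HTTP/2 DATA frames: 1 compressed-flag byte, a 4-byte big-endian message
// length, then the serialized message itself.
public final class GrpcFraming {

    static byte[] frameMessage(byte[] serializedMessage, boolean compressed) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ByteBuffer header = ByteBuffer.allocate(5);            // the 5-byte gRPC header
        header.put((byte) (compressed ? 1 : 0));               // compressed flag
        header.putInt(serializedMessage.length);               // big-endian length (ByteBuffer default)
        out.write(header.array());
        out.write(serializedMessage);                          // message payload (e.g. a protobuf)
        return out.toByteArray();                              // these bytes travel in HTTP/2 DATA frames
    }

    public static void main(String[] args) throws IOException {
        byte[] framed = frameMessage("hello".getBytes(), false);
        System.out.println("framed length = " + framed.length); // 5-byte header + 5-byte payload = 10
    }
}
```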
It would be possible to use gRPC over other protocols, though this is less common as of this writing. For example:
In process: gRPC can be used in-process, meaning there is no wire encoding at all. You still get to use the same gRPC API and stubs. This is commonly used for testing (see the sketch after this list).
QUIC: This is a UDP-based protocol that is an alternative to HTTP/2, but which has HTTP semantics. This is used on Android (Java) when using AndroidChannelBuilder.
HTTP/1.1: This is used for gRPC-Web. Some minor modifications to the gRPC protocol are needed, but it can work from regular web browsers, which currently don't expose certain parts of HTTP/2.
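As a rough sketch of the in-process case with grpc-java: GreeterImpl and GreeterGrpc below are hypothetical stand-ins for your own generated service implementation and stub; the rest is the real in-process wiring.

```java
import io.grpc.ManagedChannel;
import io.grpc.Server;
import io.grpc.inprocess.InProcessChannelBuilder;
import io.grpc.inprocess.InProcessServerBuilder;

// Sketch: wiring a gRPC server and channel inside the same process, e.g. for a test.
public final class InProcessExample {
    public static void main(String[] args) throws Exception {
        String name = InProcessServerBuilder.generateName();   // unique in-process server name

        Server server = InProcessServerBuilder.forName(name)
                .directExecutor()                               // run calls on the caller's thread
                .addService(new GreeterImpl())                  // hypothetical generated service impl
                .build()
                .start();

        ManagedChannel channel = InProcessChannelBuilder.forName(name)
                .directExecutor()
                .build();

        // Same stub API as over the network, but no wire encoding is involved.
        // GreeterGrpc is the hypothetical generated stub class; make calls on it as usual.
        GreeterGrpc.GreeterBlockingStub stub = GreeterGrpc.newBlockingStub(channel);

        channel.shutdownNow();
        server.shutdownNow();
    }
}
```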
Related
Since HTTP/2.0 supports request multiplexing, I've been wondering whether it is worth using HTTP/2.0 over HTTP/1.1 for HLS streaming.
My current HLS stream seems to be using HTTP/1.1; at least, that's what I gathered when I inspected the Native HLS Playback Chrome extension in the network tab, with all the media playlists and TS chunks being transferred over HTTP/1.1.
At this point, I've found no information on HLS over HTTP/2.0, but some info on MPEG-DASH over HTTP/2.0, so I'm wondering whether HLS over HTTP/2.0 is even possible. If it is, will I get lower latency between server and client devices?
Related question: does Exoplayer support HLS over HTTP/2.0?
Yes, it is possible; no, it won't reduce latency.
Apple recently released its spec for low-latency HLS. It includes HTTP/2 push support for very low latency.
See their specification here.
It'll probably deliver your video data into Safari and iOS as fast as other real-time protocols. :)
HLS and DASH are protocols on top of HTTP, and are defined in terms of ordinary HTTP GET requests. They will work over HTTP/1.1 as well as HTTP/2. The only question is whether the client supports HTTP/2, and whether the server that is serving the stream supports HTTP/2. Those questions would need to be asked for each application individually.
My estimate is that HTTP/2 support wouldn't yield much advantage for these use cases. HTTP/2 is best at multiplexing lots of small requests over a single TCP connection and avoiding the overhead of creating new connections. With video streaming there are not that many concurrent small requests; instead, the client fetches larger chunks piece by piece. This should work fine on top of HTTP/1.1 connections with keep-alive (which also doesn't require a new connection per request; it just doesn't support parallel requests).
HTTP/1.1 might even be more efficient than HTTP/2 for this particular use case, since it doesn't add the overhead of another flow-control mechanism on top of TCP.
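If you want to see which version a given origin actually negotiates, a quick probe with the JDK's built-in HttpClient (Java 11+) works; the playlist URL below is just a placeholder.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Checks whether a server negotiates HTTP/2 for a given URL.
public final class ProtocolCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)   // prefer HTTP/2, fall back to HTTP/1.1
                .build();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/stream/master.m3u8"))
                .GET()
                .build();

        HttpResponse<Void> response =
                client.send(request, HttpResponse.BodyHandlers.discarding());

        // Prints HTTP_2 if the server negotiated it via ALPN, HTTP_1_1 otherwise.
        System.out.println("negotiated: " + response.version());
    }
}
```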
From my understanding, HTTP/2 comes with a stateful header compression called HPACK. Doesn't it change the stateless semantics of the HTTP protocol? Is it safe for web applications to consider HTTP/2 as a stateless protocol? Finally, will HTTP/2 be compatible with the existing load balancers?
HTTP/2 is stateless.
Original HTTP is a stateless protocol, meaning that each request message can be understood in isolation. This means that every request needs to bring with it as much detail as the server needs to serve that request, without the server having to store a lot of info and meta-data from previous requests.
Since HTTP/2 doesn't change this paradigm, it has to work the same way: stateless.
This is clearly visible from the official RFCs as well. The HTTP/1.1 RFC states:
The Hypertext Transfer Protocol (HTTP) is an application-level protocol for distributed, collaborative, hypermedia information systems. It is a generic, stateless, protocol which can be used for many tasks...
and the definition of HTTP/2 says:
This specification describes an optimized expression of the semantics of the Hypertext Transfer Protocol (HTTP), referred to as HTTP version 2 (HTTP/2)... This specification is an alternative to, but does not obsolete, the HTTP/1.1 message syntax. HTTP's existing semantics remain unchanged.
Conclusion
The HTTP/2 protocol is stateless by design, as its semantics remain unchanged compared to the original HTTP.
Where confusion may come from
HTTP/2 is an application-layer protocol running on top of a TCP connection (by the way, nothing stops you from using HTTP over UDP, for example; it's possible, but UDP is not used because it is not a "reliable transport"). Don't confuse it with the session and transport layers. The HTTP protocol is stateless by design.
HTTP over an encrypted SSL/TLS connection also changes nothing about this statement, as the S in HTTPS is concerned with the transport, not with the protocol itself.
HPACK, Header Compression for HTTP/2, is a compression format especially crafted for HTTP/2 headers, and it is specified in a separate document. It doesn't change HTTP/2 itself, so it doesn't change the semantics.
The HTTP/2 RFC, in the section about HPACK, states:
Header compression is stateful. One compression context and one
decompression context are used for the entire connection.
And here's why from HPACK's RFC:
2.2. Encoding and Decoding Contexts
To decompress header blocks, a decoder only needs to maintain a
dynamic table (see Section 2.3.2) as a decoding context. No other
dynamic state is needed.
When used for bidirectional communication, such as in HTTP, the
encoding and decoding dynamic tables maintained by an endpoint are
completely independent, i.e., the request and response dynamic tables
are separate.
HPACK reduces the length of header field encoding by exploiting the
redundancy inherent in protocols like HTTP. The ultimate goal of
this is to reduce the amount of data that is required to send HTTP
requests or responses.
An HPACK implementation cannot be completely stateless, because the encoding and decoding tables, though completely independent, have to be maintained by an endpoint.
At the same time, there are libraries which try to work around this, for example the stateless event-driven HPACK codec CASHPACK:
An HPACK implementation cannot be completely stateless, because a dynamic table needs to be maintained. Relying on the assumption that HTTP/2 will always decode complete HPACK sequences, statelessness is achieved using an event-driven API.
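To make the per-connection state concrete, here is a toy sketch (not a conforming HPACK codec: no static table, no size accounting, no Huffman coding, and the class name is made up) of the dynamic table that both encoder and decoder must keep in sync.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;

// Toy illustration of the per-connection state HPACK requires: a dynamic
// table of recently seen header fields, so later headers can be encoded as
// a small index instead of the full name/value pair.
public final class DynamicTableSketch {
    private final Deque<Map.Entry<String, String>> dynamicTable = new ArrayDeque<>();

    void add(String name, String value) {
        dynamicTable.addFirst(Map.entry(name, value));    // newest entry gets index 1
    }

    Integer indexOf(String name, String value) {
        int index = 1;
        for (Map.Entry<String, String> e : dynamicTable) {
            if (e.getKey().equals(name) && e.getValue().equals(value)) {
                return index;                              // already seen on this connection
            }
            index++;
        }
        return null;                                       // must be sent literally (and then added)
    }
}
```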
Modern HTTP, including HTTP/2, is a stateful protocol. Old-timey HTTP was stateless.
Many HTTP/2 components are the very definition of stateful.
No reasonable person can read the HTTP/2 RFC and think it is stateless. The errant "HTTP is stateless" dogma of old is false and doesn't represent the current reality of HTTP.
Here's a limited, non-exhaustive list of stateful HTTP/1 and HTTP/2 components:
Cookies (named "HTTP State Management Mechanism" by the RFC)
HTTPS, which stores keys and thus state
HTTP authentication requires state
Web Storage
HTTP caching is stateful
The very purpose of the stream identifier is state
Header blocks, which establish stream identifiers, are stateful.
Frames which reference stream identifiers are stateful
Header Compression, which the HTTP RFC explicitly says is stateful, is stateful.
Opportunistic encryption is stateful.
Section 5.1 of the HTTP/2 RFC is a great example of stateful mechanisms defined by the HTTP/2 standard.
Is it safe for web applications to consider HTTP/2 as a stateless protocol?
HTTP/2 is a stateful protocol, but that doesn't mean your HTTP/2 application can't be stateless. You can keep an HTTP/2 application stateless by using only a subset of HTTP/2's features and avoiding the stateful ones.
Cookies and other stateful mechanisms, some less obvious, are later HTTP additions. HTTP/1 is said to be stateless, although in practice we use standardized stateful mechanisms. Unlike HTTP/1.0, HTTP/2 defines stateful components in its standard and is therefore stateful. A particular HTTP/2 application can use a subset of HTTP/2 features to maintain statelessness.
Existing applications, even HTTP/1 applications, that need state will break if you try to use them statelessly. It can be impossible to log into some HTTP/1.1 websites if cookies are disabled, thus breaking the application. It may not be safe to assume that a particular HTTP/1 application does not use state. This is no different for HTTP/2. Before Netscape invented cookies and HTTPS in 1994, HTTP could be considered stateless.
Say it with me one last time:
HTTP/2 is a stateful protocol.
HTTP is mainly used for viewing web pages. CoAP is a simplified version of HTTP for IoT and WSNs. Although CoAP is based on UDP, it has ACK messages to emulate TCP's reliability. Since CoAP is simpler than HTTP, it should have lower latency and draw less power.
Then why do browsers and web servers not replace HTTP with CoAP? Given the previous arguments, is it expected that CoAP will completely replace HTTP? Is it just a matter of time? Are there any features which are supported only by HTTP?
If CoAP is more efficient, can I say that HTTP will be useless in the future if we replace it with CoAP?
The industry plan is to improve HTTP by moving to HTTP/2, and HTTP/2 includes (among other features) header compression, which should bring you similar benefits to CoAP.
While most web servers and some browsers already support HTTP/2, AFAIK no browser or mainstream web server supports CoAP. The same goes for TLS vs. DTLS.
Are there features CoAP cannot support but HTTP can?
As you said, HTTP is TCP-based, while CoAP is UDP-based.
With UDP you have to send keep-alive pings every few seconds to keep the NAT/firewall mapping open, while with TCP this is typically only required every 15 minutes or so.
So if you need to keep the connection open (e.g. for push technologies), then CoAP is less efficient than HTTP (and HTTP/2).
CoAP was never intended to replace HTTP. While it seems to "emulate" HTTP, that is only because it follows the RESTful paradigm. CoAP is intended as an application-layer protocol for devices, and more specifically was designed for constrained devices.
The RESTful design was also chosen to facilitate proxying through HTTP (there are recommendations for doing that in the RFC). But again, it was never intended to replace HTTP.
CoAP is built with a small amount of resources in mind. The small header and other features of CoAP are there to make sure that constrained devices have a standard means of communicating on the internet.
HTTP and CoAP each have their own purpose.
CoAP and HTTP can be used for different purposes. CoAP has been designed for IoT and M2M environments, in other words, to send short messages over UDP. For instance:
A typical CoAP exchange consists of 2 messages, i.e., a request and a
response. In contrast, an HTTP request first requires the client to establish a TCP
connection and later terminate it. This results in at least 9 messages for only one
request [11]. Note that this argument is not necessarily true for large payloads. After
TCP’s slow-start phase, it is able to send multiple packets at once and acknowledge
all of them with a single acknowledgement. CoAP’s blockwise transfer [8] though,
requires an acknowledgement for each block and leads to more messages and higher
transfer time. Since we expect the majority of CoAP messages to be rather short, this
is of less importance. However, CoAP’s blockwise mechanism allows a constrained
server to not only receive but also process a large request block-by-block. This
would not be possible if we used HTTP and TCP.
(Scalability for IoT Cloud Services by Martin Lanter)
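As a sketch of that two-message exchange, here is a simple GET using the Eclipse Californium CoAP library (an assumption: any CoAP client library would do, and the coap:// URL is a placeholder).

```java
import org.eclipse.californium.core.CoapClient;
import org.eclipse.californium.core.CoapResponse;

// A single CoAP exchange: one confirmable request over UDP, one response.
public final class CoapGetExample {
    public static void main(String[] args) throws Exception {
        CoapClient client = new CoapClient("coap://sensor.local:5683/temperature");

        CoapResponse response = client.get();   // request + response, no TCP handshake or teardown
        if (response != null) {
            System.out.println(response.getResponseText());
        }
        client.shutdown();
    }
}
```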
Actually, Firefox can support CoAP using the Copper (Cu) plug-in. ;)
CoAP is optimized for the resource-constrained networks and devices typical of IoT and M2M applications. It uses fewer resources than HTTP and can provide an environment for communication in WSN, IoT, and M2M settings. It is not meant to replace HTTP.
HTTP and CoAP target different application scenarios. HTTP is mainly designed for Internet devices where power and other constraints are not an important issue. HTTP is more reliable than CoAP as it uses TCP.
Why is HTTP based on request/response? Why can't the server push data to the client directly over HTTP, rather than only in response to a client request? I understand that at the start of a connection the client has to send a request, but why must the client keep following the request/response pattern after that? Long polling, Comet, BOSH, and other server-push methods are also based on request/response and don't really answer the question.
RFC 6455 defines the WebSocket protocol. An HTTP/1.1 connection can be upgraded to a bi-directional, TCP-like socket that does not require you to follow the request/response pattern. The original spec had only UTF-8 text support, but in modern browsers binary data can be sent down the wire as well. Working with WebSockets presents a new way of structuring a web application, but its growing browser support makes it a viable option for modern websites.
Node.js is the easiest way to get into using WebSockets, with the Socket.IO library. Do check it out.
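For comparison, here is a minimal WebSocket client sketch using the JDK's built-in API (Java 11+); the ws:// URL is a placeholder, but it shows how either side can send at any time once the HTTP handshake completes.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

// After the upgrade handshake, the server can push messages without being asked.
public final class WebSocketExample {
    public static void main(String[] args) throws Exception {
        WebSocket.Listener listener = new WebSocket.Listener() {
            @Override
            public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
                System.out.println("server pushed: " + data);   // arrives unprompted
                webSocket.request(1);                            // ask for the next message
                return null;
            }
        };

        WebSocket ws = HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .buildAsync(URI.create("ws://localhost:8080/chat"), listener)
                .join();

        ws.sendText("hello", true);                              // client can also send whenever it likes
        Thread.sleep(5_000);                                     // keep the process alive to receive pushes
        ws.sendClose(WebSocket.NORMAL_CLOSURE, "done").join();
    }
}
```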
Stupid question, but just making sure here:
When should I use TCP over HTTP? Are there any examples where one is better than the other?
TCP is full-duplex, two-way communication, while HTTP uses a request/response model. Say you are writing a chat or messaging application: TCP will work much better because you can notify the client immediately, whereas with HTTP you have to resort to tricks like long polling.
However, TCP is just a byte stream. You have to define another protocol on top of it to frame your messages. You can use Google's Protocol Buffers for the message encoding.
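For example, a minimal length-prefixed framing layer over a raw TCP stream might look like the sketch below (the payload could be a serialized protobuf message); this is exactly the kind of plumbing HTTP already gives you.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Framing you have to invent yourself on a raw TCP stream: each message is
// a 4-byte length prefix followed by exactly that many payload bytes.
public final class LengthPrefixedFraming {

    static void writeMessage(OutputStream raw, byte[] payload) throws IOException {
        DataOutputStream out = new DataOutputStream(raw);
        out.writeInt(payload.length);    // 4-byte big-endian length prefix
        out.write(payload);              // message body (e.g. a serialized protobuf)
        out.flush();
    }

    static byte[] readMessage(InputStream raw) throws IOException {
        DataInputStream in = new DataInputStream(raw);
        int length = in.readInt();       // read the prefix first...
        byte[] payload = new byte[length];
        in.readFully(payload);           // ...then exactly that many payload bytes
        return payload;
    }
}
```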
Use HTTP if you need the services it provides -- e.g., message framing, caching, redirection, content metadata, partial responses, content negotiation -- as well as a large number of well-understood tools, implementations, documentation, etc.
Use TCP if you can't work within those constraints. However, if you use TCP you'll be creating a new application protocol, which has a number of pitfalls.