I'm looking for the standards behind realtime web applications.
I know about the W3C WebSocket API, the IETF WebSocket protocol, the Bayeux protocol and the Server-Sent Events standard.
Are there any other standards for techniques like long-polling, callback-polling, iframe streaming, htmlfile streaming, XHR streaming, multipart streaming, or Direct Socket?
Long polling doesn't have a dedicated standard. It is effectively an implementation technique layered on top of existing standards like HTTP and XMLHttpRequest (itself specified in W3C working drafts). The Wikipedia page is a pretty good reference.
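To make that concrete, here is a minimal client-side long-polling loop in TypeScript. The /events endpoint and its 200/204 behaviour are assumptions for this sketch, not anything mandated by a standard; the point is that everything here is ordinary HTTP.

// A minimal client-side long-polling loop. The /events endpoint and its
// 200 (data) / 204 (timed out, no data) behaviour are assumptions for this
// sketch; everything here is ordinary HTTP plus fetch.
async function longPoll(url: string, onMessage: (data: unknown) => void): Promise<void> {
  while (true) {
    try {
      const res = await fetch(url);                 // server holds this open until it has data
      if (res.status === 200) {
        onMessage(await res.json());                // deliver the event, then immediately re-poll
      }
    } catch {
      await new Promise(r => setTimeout(r, 1000));  // brief back-off on network errors
    }
  }
}

longPoll('/events', data => console.log('event:', data));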
XMPP standardizes a technique called BOSH (Bidirectional-streams Over Synchronous HTTP), which is likewise implemented on top of long-lived HTTP requests.
multipart/x-mixed-replace was implemented by Netscape but not IE, and is not a standard. The Push technology Wikipedia page is a good reference.
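For what it's worth, the technique itself is easy to sketch. Below is a rough, illustrative Node.js/TypeScript server that streams parts with multipart/x-mixed-replace; the boundary name and payloads are arbitrary, and whether a browser does anything useful with it depends entirely on the browser.

import { createServer } from 'node:http';

// Illustrative only: stream JSON "frames" with multipart/x-mixed-replace.
// The boundary name and payloads are arbitrary, and support is browser-dependent.
const BOUNDARY = 'frame';

createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': `multipart/x-mixed-replace; boundary=${BOUNDARY}`,
  });
  let n = 0;
  const timer = setInterval(() => {
    const body = JSON.stringify({ tick: n++ });
    res.write(
      `--${BOUNDARY}\r\n` +
      'Content-Type: application/json\r\n' +
      `Content-Length: ${Buffer.byteLength(body)}\r\n` +
      '\r\n' +
      body + '\r\n',
    );
  }, 1000);
  req.on('close', () => clearInterval(timer));   // stop when the client disconnects
}).listen(8080);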
Hope these help.
If anyone is interested in a Java implementation I just wrote a sample app and a blog post about it. It uses Java, Maven, Comet, Bayeux, Spring.
http://jaye.felipera.cloudbees.net/
http://geeks.aretotally.in/thinking-in-reverse-not-taking-orders-from-yo
I found an interesting answer on Quora (http://www.quora.com/What-are-the-standards-behind-realtime-web):
The following protocols are core to the Realtime Web:
HTTP protocol (which in general makes so much possible)
WebSockets protocol
PubSubHubbub protocol
Webhooks
eXtensible Messaging and Presence Protocol (XMPP) & BOSH (http://xmpp.org/extensions/xep-0...)
Activity Streams (as pointed out by Chris Saad)
http-live-streaming / HTTP Long-Polling
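Of the techniques mentioned in the question, Server-Sent Events is the one with the simplest standardized wire format. A minimal sketch (port, path and payload are arbitrary choices for illustration):

import { createServer } from 'node:http';

// Server side: the SSE wire format is just "data: ...\n\n" lines over a
// long-lived response with Content-Type: text/event-stream.
createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });
  const timer = setInterval(() => {
    res.write(`data: ${JSON.stringify({ now: Date.now() })}\n\n`);
  }, 2000);
  req.on('close', () => clearInterval(timer));   // client went away
}).listen(8080);

// Browser side (standardized EventSource API, reconnects automatically):
//   const source = new EventSource('http://localhost:8080/');
//   source.onmessage = e => console.log('event:', e.data);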
In this RabbitMQ documentation, MQTT, AMQP and STOMP are referred to as supported messaging protocols. Given the differences between MQTT, AMQP and STOMP, that is completely understandable to me.
However, the end of the article becomes confusing where it discusses HTTP. That paragraph states that "HTTP is of course not a messaging protocol". I had thought HTTP would also be directly supported by RabbitMQ in one way or another, but it is only supported for 'low volume messaging purposes' (diagnostics, for example) and for direct use from HTML.
If half the world uses HTTP web API services, why can't HTTP be counted among the messaging protocols? Why is HTTP not a messaging protocol, and what definition of a messaging protocol does RabbitMQ use?
HTTP falls squarely into the category of synchronous request-response protocols. This is the very opposite of the asynchronous message-passing protocols typical of message-oriented middleware.
The 'half of the world' that uses HTTP for web API services does not use it as a loosely coupled, messaging-based API, but as a tightly coupled request-response API.
Messaging protocols come with certain delivery guarantees (at-least-once, exactly-once, at-most-once, exactly-once-in-order, etc.) which are provided by the protocol definition and its implementation. Attempting to do messaging over HTTP quickly devolves into replicating these requirements (retries, sequence numbers, duplicate handling, etc.) in a layer above HTTP, relegating HTTP to a transport that offers little value from a messaging point of view.
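To make that concrete, here is a rough TypeScript sketch of what "at-least-once over HTTP" tends to look like: retries, a client-generated message ID and server-side duplicate detection all have to be built above HTTP. The /messages endpoint and the Idempotency-Key header are invented for illustration; a broker speaking AMQP, MQTT or STOMP gives you this bookkeeping as part of the protocol.

// At-least-once delivery bolted on top of HTTP. The /messages endpoint and
// the Idempotency-Key header are invented for illustration.
async function publish(body: unknown, maxRetries = 5): Promise<void> {
  const messageId = crypto.randomUUID();            // client-generated ID for duplicate detection
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const res = await fetch('/messages', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json', 'Idempotency-Key': messageId },
        body: JSON.stringify(body),
      });
      if (res.ok) return;                           // broker-style "ack"
    } catch {
      // network error: fall through and retry
    }
    await new Promise(r => setTimeout(r, 2 ** attempt * 100));  // exponential back-off
  }
  throw new Error(`message ${messageId} not acknowledged after ${maxRetries} attempts`);
}

// The receiving side, in turn, has to remember the Idempotency-Keys it has
// already processed; with AMQP, MQTT or STOMP the broker does this for you.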
A couple of months ago, HTTP/2 was published as RFC 7540.
How will this affect the existing REST API built on HTTP/1.1?
As per Wikipedia, HTTP/2 has added new features.
How can we take advantage of these new features?
The main semantic of HTTP has been retained in HTTP/2. This means that it still has HTTP methods such as GET, POST, etc., HTTP headers, and URIs to identify resources.
What has changed in HTTP/2 with respect to HTTP/1.1 is the way the HTTP semantic (e.g. "I want to PUT resource /foo on host example.com") is transported over the wire.
In this light, REST APIs built on HTTP/1.1 will continue to work transparently as before, with no changes to be made to applications. The web container that runs the applications takes care of translating the new wire format into the usual HTTP semantics on behalf of the applications, so applications just see the higher-level HTTP semantics, no matter whether they were transported via HTTP/1.1 or HTTP/2 over the wire.
Because the HTTP/2 wire format is more efficient (in particular due to multiplexing and header compression), REST APIs on top of HTTP/2 will also benefit from this.
The other major improvement in HTTP/2, HTTP/2 Push, targets efficient download of correlated resources, and it's probably not useful in the REST use case.
A typical requirement of HTTP/2 is to be deployed over TLS.
This requires deployers to move from http to https and to set up the infrastructure needed to support that (buying certificates from a trusted authority, renewing them, etc.).
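As a rough illustration of the "no application changes" point (Node.js shown; the certificate paths and the /user/3 resource are placeholders), the handler below only deals in plain HTTP semantics, while the server setup decides how they travel over the wire:

import { readFileSync } from 'node:fs';
import { createSecureServer, Http2ServerRequest, Http2ServerResponse } from 'node:http2';

// The handler only ever sees plain HTTP semantics: method, path, headers, body.
// Whether a request arrived over HTTP/1.1 or HTTP/2 is dealt with underneath it.
function handler(req: Http2ServerRequest, res: Http2ServerResponse): void {
  if (req.method === 'GET' && req.url === '/user/3') {
    res.writeHead(200, { 'content-type': 'application/json' });
    res.end(JSON.stringify({ _self: '/user/3', firstName: 'John', lastName: 'Doe' }));
  } else {
    res.writeHead(404);
    res.end();
  }
}

// allowHTTP1 lets the same port fall back to HTTP/1.1 for older clients;
// key.pem/cert.pem stand in for the TLS material discussed above.
createSecureServer(
  { key: readFileSync('key.pem'), cert: readFileSync('cert.pem'), allowHTTP1: true },
  handler,
).listen(443);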
The HTTP/2 spec intentionally did not introduce new semantics for the application programmer. In fact, the major client-side libraries (NSURLSession on iOS, OkHttp on Android, React Native, JS in the browser environment) support HTTP/2 transparently to you as a developer.
Due to HTTP/2's ability to multiplex many requests over a single TCP connection, some optimizations that application developers implemented in the past, such as request batching, become obsolete and even counter-productive.
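For instance, with Node's built-in http2 client the requests below share a single TCP connection and are multiplexed by the protocol itself, so a hand-rolled /batch endpoint buys you little (the host and paths are made up):

import { connect } from 'node:http2';

// One session = one TCP connection; each request() below is a separate,
// concurrently multiplexed HTTP/2 stream, so no application-level batching is needed.
const session = connect('https://api.example.com');

for (const path of ['/user/3', '/post/3', '/post/13']) {
  const req = session.request({ ':path': path });
  let body = '';
  req.setEncoding('utf8');
  req.on('data', chunk => (body += chunk));
  req.on('end', () => console.log(path, '->', body.length, 'bytes'));
  req.end();
}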
The Push feature will likely be used to deliver events, and may be able to replace polling and possibly WebSockets in some applications.
One possible application of the HTTP/2 server push feature to REST APIs is the ability to accelerate legacy applications, e.g. at the reverse-proxy level, by pushing responses to anticipated requests ahead of time instead of waiting for those requests to arrive: for example, pushing the answers to the user-profile and other common API calls right after the login request completes.
However, Push is not yet widely implemented across server and client infrastructure.
The main benefit I see is Server Push for hypermedia RESTful APIs, which hold references to resources, often absolute domain-dependent URLs such as /post/12.
Example: GET https://api.foo.bar/user/3
{
  "_self": "/user/3",
  "firstName": "John",
  "lastName": "Doe",
  "recentPosts": [
    "/post/3",
    "/post/13",
    "/post/06"
  ]
}
HTTP/2 Push can be used to preemptively populate the browser cache if the server knows the client will likely want to do certain GET requests in the future.
In the above example, if HTTP/2 Server Push is activated and properly configured, it could deliver /post/3, /post/13 and /post/06 along with /user/3.
Subsequent GETs to one of those posts would then result in cached responses. Also, the cache digest draft would allow clients to send information about the state of their cache, avoiding unnecessary pushes. This is much more practical for hypermedia-driven APIs than embedding bodies, as HAL does.
More information on the reasons here: The problems with embedding in REST today and how it might be solved with HTTP/2.
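A rough sketch of what that could look like with Node's http2 module is below. Which resources to push, the response bodies and the certificate paths are all illustrative; a real deployment would also take the client's cache state (e.g. the cache digest mentioned above) into account before pushing.

import { readFileSync } from 'node:fs';
import { createSecureServer } from 'node:http2';

const server = createSecureServer({
  key: readFileSync('key.pem'),     // placeholder TLS material
  cert: readFileSync('cert.pem'),
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] !== '/user/3') {
    stream.respond({ ':status': 404 });
    stream.end();
    return;
  }

  // Push the posts referenced by the representation before the client asks for them.
  for (const path of ['/post/3', '/post/13', '/post/06']) {
    stream.pushStream({ ':path': path }, (err, push) => {
      if (err) return;                                  // e.g. the client disabled push
      push.respond({ ':status': 200, 'content-type': 'application/json' });
      push.end(JSON.stringify({ _self: path, body: '...' }));
    });
  }

  stream.respond({ ':status': 200, 'content-type': 'application/json' });
  stream.end(JSON.stringify({
    _self: '/user/3',
    firstName: 'John',
    lastName: 'Doe',
    recentPosts: ['/post/3', '/post/13', '/post/06'],
  }));
});

server.listen(443);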
Why is HTTP based on request/response? Why can't the server push data to the client directly over HTTP; why must every message be a response to a client request? I understand that the client has to send a request at the start of the connection, but why must the exchange continue as request/response after that? Long polling, Comet, BOSH and other server-push methods are also based on request/response and don't really answer the question.
This is exactly what WebSockets solve. RFC 6455 defines the WebSocket protocol: the connection starts as an HTTP/1.1 request with an Upgrade handshake and then becomes a bi-directional, TCP-like socket that does not require you to follow the request/response pattern. The original spec supported only UTF-8 text, but with modern browsers binary data can be sent down the wire as well. Working with WebSockets requires a new way of framing a web application, but its growing browser support makes it a viable option for modern websites.
Node.js with the Socket.IO library is one of the easiest ways to get started with WebSockets. Do check it out.
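A minimal Socket.IO sketch (event names and port are arbitrary): once the connection is established, the server pushes whenever it likes, without waiting for a request.

import { Server } from 'socket.io';

// Server: push data whenever the server wants; no client request is needed.
const io = new Server(3000);

io.on('connection', socket => {
  socket.emit('greeting', { hello: 'world' });                   // push on connect
  const timer = setInterval(() => socket.emit('tick', Date.now()), 1000);
  socket.on('reply', msg => console.log('client said:', msg));   // client can talk back at any time
  socket.on('disconnect', () => clearInterval(timer));
});

// Browser: const socket = io('http://localhost:3000');
//          socket.on('tick', t => console.log(t));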
There's a lot of elaboration about the HTTP protocol, but in essence it's nothing but a string of ASCII characters transmitted over the TCP protocol, and that string defines the semantics of the protocol. Am I right about this?
If so, two questions follow:
Can we devise any protocol we want, since it just looks like passing strings over the internet?
Why don't we compress the HTTP strings before passing them down to the TCP level?
That's right, HTTP is by no means special, but because it underpins the web it receives a lot of attention. It's an application-level protocol like SMTP or FTP or any other.
Yes, you could design any protocol you like. For fun, grab an RFC for SMTP, FTP or HTTP, connect to your own server, and learn the protocol. RFC 2324 is also required reading - http://www.faqs.org/rfcs/rfc2324.html
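In that spirit, here is a tiny sketch that speaks HTTP/1.1 "by hand" over a raw TCP socket, which makes the "it's just text over TCP" point concrete (example.com is used as a convenient public target):

import { connect } from 'node:net';

// Speak HTTP/1.1 by hand: it really is just lines of text over a TCP socket.
const socket = connect(80, 'example.com', () => {
  socket.write(
    'GET / HTTP/1.1\r\n' +
    'Host: example.com\r\n' +
    'Connection: close\r\n' +
    '\r\n',
  );
});

socket.setEncoding('utf8');
socket.on('data', chunk => process.stdout.write(chunk));   // status line, headers, then the body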
Lack of HTTP header compression has been talked about a lot in recent years. See Steve Souders' blog/books, YSlow! and the Google Page Speed sites. The SPDY protocol is probably going to be the front runner in addressing several of the current issues with HTTP connection management, performance and security - http://www.chromium.org/spdy/spdy-whitepaper
Sure. But you would have to get others to adopt your protocol (unless it is an internal/proprietary spec). And if you can coherently express your communique in the form of HTTP, why not use it? It's widely implemented in virtually every language and operating system, and is well understood and easily debugged. Don't just create protocols for the heck of it.
The HTTP specification provides for several common compression schemes. gzip and deflate are particularly widely used. See, for example, Apache's mod_gzip and mod_deflate. Clients and servers routinely negotiate compression on your behalf.
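A sketch of that negotiation in Node.js: the client advertises what it accepts via Accept-Encoding, and the server compresses and labels the response only when the request allows it (port and payload are arbitrary):

import { createServer } from 'node:http';
import { gzipSync } from 'node:zlib';

createServer((req, res) => {
  const body = JSON.stringify({ message: 'hello '.repeat(100) });
  const acceptsGzip = /\bgzip\b/.test(req.headers['accept-encoding'] ?? '');

  if (acceptsGzip) {
    // Client sent "Accept-Encoding: gzip", so compress and label the encoding.
    res.writeHead(200, { 'Content-Type': 'application/json', 'Content-Encoding': 'gzip' });
    res.end(gzipSync(body));
  } else {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(body);
  }
}).listen(8080);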
Is there any advantage of using RTSP for serving static (not live stream) TS videos?
HTTP looks better for this purpose and triggers fewer bugs in client software. Seeking works better and fast-forward playback is smoother. It also does not require any indexes.
Are there any pitfalls when migrating from RTSP to plain HTTP?
Using HTTP for media streaming, while popularized by Apple's iOS products, is probably not as standardized as RTSP. There are many HTTP streaming implementations, whose main issues are:
inability to PAUSE;
byte-range seeks.
Other than this, HTTP is better for many reasons; see the byte-range sketch below.
See this article.
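For the byte-range point specifically, this is roughly what HTTP gives you almost for free: the player sends a Range header to seek, and the server answers 206 Partial Content. The sketch below is simplified (single range only, no validation; video.ts is a placeholder path):

import { createServer } from 'node:http';
import { createReadStream, statSync } from 'node:fs';

const FILE = 'video.ts';                          // placeholder path to a static TS file

createServer((req, res) => {
  const size = statSync(FILE).size;
  const match = /^bytes=(\d+)-(\d*)$/.exec(req.headers.range ?? '');

  if (!match) {                                   // no Range header: send the whole file
    res.writeHead(200, { 'Content-Type': 'video/mp2t', 'Content-Length': size, 'Accept-Ranges': 'bytes' });
    createReadStream(FILE).pipe(res);
    return;
  }

  const start = Number(match[1]);
  const end = match[2] ? Number(match[2]) : size - 1;
  res.writeHead(206, {                            // 206 Partial Content: this is how seeking works
    'Content-Type': 'video/mp2t',
    'Content-Range': `bytes ${start}-${end}/${size}`,
    'Content-Length': end - start + 1,
    'Accept-Ranges': 'bytes',
  });
  createReadStream(FILE, { start, end }).pipe(res);
}).listen(8080);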
HTTP "streaming" servers will (in most cases) use TCP as their network transport, RTSP servers usually offer RTP over UDP which is more suited to multimedia streaming where some errors/packet loss can be tolerated with the benefit of lower latency and less network overheads.
Also, since RTSP typically carries the media as RTP over UDP, it can be used inside a LAN with multicast/broadcast. This is quite useful for TV streams, as it uses less bandwidth.