Compress file before upload via HTTP

Is it possible to compress data being sent from the client's browser (a file upload) to the server?
Flash, Silverlight and other technologies are OK!

Browsers never compress uploaded data because they have no way of knowing whether the server supports it.
Downloaded content can be compressed because the Accept-Encoding request header allows the browser to indicate to the server that it supports compressed content. Unfortunately, there's no equivalent protocol that works the other way and allows the server to indicate to the browser that it supports compression.
If you have control over the server and client (e.g. using silverlight, flash) then you could make use of compressed request bodies.
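A plugin-free equivalent today is the browser's CompressionStream API. The sketch below is a minimal example of a compressed request body, assuming a hypothetical /upload endpoint whose server has been configured to decompress gzip request bodies; since there is no negotiation for request compression, this only works when you control both ends:

```typescript
// Sketch: gzip a File client-side before uploading. The "/upload" endpoint
// is hypothetical and must be configured to decompress gzip request bodies.
async function uploadCompressed(file: File): Promise<Response> {
  // Pipe the file's bytes through the browser's built-in gzip compressor.
  const compressed = file.stream().pipeThrough(new CompressionStream("gzip"));
  // Collect the compressed stream into a Blob so fetch can send it.
  const body = await new Response(compressed).blob();
  return fetch("/upload", {
    method: "POST",
    headers: {
      "Content-Encoding": "gzip", // server must explicitly support this
      "Content-Type": file.type || "application/octet-stream",
    },
    body,
  });
}
```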

For Silverlight there is a commercial library called Xceed which, amongst other things, "Lets you compress data as it is being uploaded."; it is not free, though. I believe that this can only be done via a technology such as Flash or Silverlight, and not natively in the browser.
I disagree with the above poster about browsers doing this automatically: compression is only applied to downloaded content such as standard HTML/CSS/text files, and only if both the server and the browser have compression enabled (gzip, deflate).

Related

Prevent browser from sending Expect header?

I am writing an embedded web server, and want to avoid unnecessary protocol aspects, to save limited flash memory. Is there a universal way to prevent browsers from sending the Expect: header when uploading a file?
From https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Expect:
No common browsers send the Expect header, but some other clients such as cURL do so by default.
Does such language imply that future versions of common browsers will also not send the Expect header? If not, how would I prevent them from doing so?
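There is no way to forbid a client from sending Expect: 100-continue; what a small server can do instead is handle it cheaply. As an illustration of the protocol mechanics (using Node's http module purely as a sketch; an embedded server would do the same in its own language):

```typescript
import { createServer } from "node:http";

// Regular requests (no Expect header) land here.
const server = createServer((req, res) => {
  res.end("ok");
});

// Requests carrying "Expect: 100-continue" fire this event instead;
// answering costs a single short interim response line.
server.on("checkContinue", (req, res) => {
  res.writeContinue(); // sends "HTTP/1.1 100 Continue\r\n\r\n"
  req.resume();        // drain the uploaded body
  res.end("ok");
});

server.listen(8080);
```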

How does browser download image and other binary files?

I want to know the exact mechanism behind the transfer of binary files using a browser. If the browser uses pure HTTP, does that mean only text is allowed, so the image is encoded using base64 and decoded later in the browser? Or does the browser download it using some other mechanism where this encoding/decoding is not needed?
Just in case someone wants to know the answer: while you can send binary data over HTTP using base64 encoding, it is not efficient, as the encoding inflates the payload and decoding adds work. It is also unnecessary, because HTTP is not a text-only protocol: the headers are text, but the message body is raw bytes. So when you request an image file over HTTP, the server sends metadata such as the MIME type and Content-Length in the headers, followed by the binary image data itself, all on the same HTTP connection; no encoding/decoding step is involved.
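A quick way to see this for yourself is a sketch that fetches an image and reads the raw bytes of the response body (the URL is whatever image you point it at):

```typescript
// Sketch: the response body arrives as raw bytes over the same HTTP
// connection; no base64 step is involved unless you add one yourself.
async function fetchImageBytes(url: string): Promise<Uint8Array> {
  const res = await fetch(url);
  console.log(res.headers.get("content-type"));   // e.g. "image/png"
  console.log(res.headers.get("content-length")); // body size in bytes, if sent
  const buf = await res.arrayBuffer();            // the raw binary body
  return new Uint8Array(buf);
}
```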

Why bundle optimizations are no longer a concern in HTTP/2

I read in the bundling section of the SystemJS documentation that bundling optimizations are no longer needed in HTTP/2:
Over HTTP/2 this approach may be preferable as it allows files to be individually cached in the browser meaning bundle optimizations are no longer a concern.
My questions:
Does it mean we don't need to think about bundling scripts or other resources when using HTTP/2?
What is it in HTTP/2 that enables this?
The bundling optimization was introduced as a "best practice" when using HTTP/1.1 because browsers could only open a limited number of connections to a particular domain.
A typical web page has 30+ resources to download in order to be rendered.
With HTTP/1.1, a browser opens 6 connections to the server, requests 6 resources in parallel, waits for those to be downloaded, then requests the next 6 resources, and so forth (of course some resources will download faster than others, and those connections can be reused sooner for another request).
The point being that with HTTP/1.1 you can only have at most 6 outstanding requests.
To download 30 resources you would need 5 roundtrips, which adds a lot of latency to the page rendering.
In order to make the page rendering faster, with HTTP/1.1 the application developer had to reduce the number of requests for a single page.
This led to "best practices" such as domain sharding, resource inlining, image spriting, resource bundling, etc., but these are in fact just clever hacks to work around HTTP/1.1 protocol limitations.
With HTTP/2 things are different because HTTP/2 is multiplexed.
Even without HTTP/2 Push, the multiplexing feature of HTTP/2 renders all those hacks useless, because now you can request hundreds of resources in parallel using a single TCP connection.
With HTTP/2, the same 30 resources would require just 1 roundtrip to be downloaded, giving you a straight 5x performance increase in that operation (that typically dominates the page rendering time).
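To illustrate from the client's point of view (the URLs below are placeholders): over HTTP/2 all of these requests can be in flight at once on one connection, while over HTTP/1.1 they queue behind the per-host connection limit.

```typescript
// Sketch: 30 parallel requests; over HTTP/2 they share one multiplexed
// TCP connection, over HTTP/1.1 they queue behind ~6 connections per host.
const urls = Array.from({ length: 30 }, (_, i) => `/assets/resource-${i}.js`);

async function fetchAll(): Promise<string[]> {
  const responses = await Promise.all(urls.map((u) => fetch(u)));
  return Promise.all(responses.map((r) => r.text()));
}
```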
Given that the trend is for web content to become richer, pages will have more and more resources; and the more resources a page has, the better HTTP/2 performs with respect to HTTP/1.1.
On top of HTTP/2 multiplexing, you have HTTP/2 Push.
Without HTTP/2 Push, the browser has to request the primary resource (the *.html page), download it, parse it, and then arrange to download the 30 resources referenced by the primary resource.
HTTP/2 Push allows you to get the 30 resources while you are requesting the primary resource that references them, saving one more roundtrip, again thanks to the HTTP/2 multiplexing.
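As an illustration, a minimal server-push sketch using Node's http2 module (the certificate and file paths are placeholders):

```typescript
import { createSecureServer } from "node:http2";
import { readFileSync } from "node:fs";

// Sketch: push app.js alongside the HTML so the browser does not have to
// parse the page first and then request it. Paths are placeholders.
const server = createSecureServer({
  key: readFileSync("server.key"),
  cert: readFileSync("server.crt"),
});

server.on("stream", (stream, headers) => {
  if (headers[":path"] === "/") {
    if (stream.pushAllowed) {
      stream.pushStream({ ":path": "/app.js" }, (err, push) => {
        if (err) return;
        push.respond({ ":status": 200, "content-type": "text/javascript" });
        push.end(readFileSync("app.js"));
      });
    }
    stream.respond({ ":status": 200, "content-type": "text/html" });
    stream.end('<script src="/app.js"></script>');
  }
});

server.listen(8443);
```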
It is really the multiplexing feature of HTTP/2 that allows you to forget about resource bundling.
You can look at the slides of the HTTP/2 session that I gave at various conferences.
HTTP/2 supports "server push", which obsoletes bundling of resources. So, yes, if you are using HTTP/2, bundling would actually be an anti-pattern.
For more info check this: https://www.igvita.com/2013/06/12/innovating-with-http-2.0-server-push/
Bundling is doing a lot in a modern JavaScript build.
HTTP/2 only addresses the optimisation of minimising the number of requests between the client and server, by making the cost of additional requests much cheaper than with HTTP/1.
But bundling today is not only about minimising the count of requests between the client and the server. Two other relevant aspects are:
Tree Shaking: Modern bundlers like WebPack and Rollup can eliminate unused code (even from 3rd party libraries).
Compression: Bigger JavaScript bundles can be better compressed (gzip, zopfli, ...); see the sketch after this list.
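A small sketch of the compression point using Node's zlib (the module sources are toy placeholders): compressing one concatenated bundle usually beats compressing the same modules one by one, both because the compressor can exploit redundancy across files and because each gzip stream carries fixed overhead.

```typescript
import { gzipSync } from "node:zlib";

// Toy module sources; real bundles share far more redundancy than this.
const modules = [
  "export function add(a, b) { return a + b; }",
  "export function sub(a, b) { return a - b; }",
  "export function mul(a, b) { return a * b; }",
];

// Total size when each module is gzipped on its own.
const separate = modules
  .map((m) => gzipSync(Buffer.from(m)).length)
  .reduce((sum, n) => sum + n, 0);

// Size when the modules are bundled first, then gzipped once.
const bundled = gzipSync(Buffer.from(modules.join("\n"))).length;

console.log({ separate, bundled }); // bundled is typically the smaller one
```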
Also, HTTP/2 server push can waste bandwidth by pushing resources that the browser does not need, because it already has them in its cache.
Two good posts about the topic are:
http://engineering.khanacademy.org/posts/js-packaging-http2.htm
https://www.contentful.com/blog/2017/04/04/es6-modules-support-lands-in-browsers-is-it-time-to-rethink-bundling/
Both those posts come to the conclusion that "build processes are here to stay".
Bundling is still useful if your website is
Served over plain HTTP (browsers only support HTTP/2 over HTTPS)
Hosted by a server that does not support ALPN and HTTP 2.
Required to support old browsers (Sensitive and Legacy Systems)
Required to support both HTTP 1 and 2 (Graceful Degradation)
There are two HTTP 2.0 features that makes bundling obsolete:
HTTP 2.0 Multiplexing and Concurrency (allows multiple resources to be requested on a single TCP connection)
HTTP 2.0 Server Push (Server push allows the server to preemptively push the responses it thinks the client will need into the client's cache)
PS: Bundling is not the only optimization technique that would be eliminated by the advent of HTTP 2.0 features. Techniques like image spriting, domain sharding and resource inlining (image embedding through data URIs) will also be affected.
How HTTP 2.0 affects existing web optimization techniques

When a web application serves a video file, is it streamed automatically? What options are there?

Your question is vague. The behavior you get will depend on what Content-Type header your "web application" (or container) gives your file.
Different types will do different things depending on the browser.
http://en.wikipedia.org/wiki/Internet_media_type
If you want to check what headers your application is sending, use Firefox + Live HTTP Headers.
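For video specifically, what lets a browser seek without downloading the whole file is HTTP Range requests. A rough sketch of a server honouring them, using Node's http module ("video.mp4" is a placeholder path, and the range parsing is deliberately simplified):

```typescript
import { createServer } from "node:http";
import { statSync, createReadStream } from "node:fs";

const FILE = "video.mp4"; // placeholder path

const server = createServer((req, res) => {
  const { size } = statSync(FILE);
  const range = req.headers.range; // e.g. "bytes=1000-"
  if (range) {
    // Simplified: honour only the start offset, serve to end of file.
    const start = Number(range.replace("bytes=", "").split("-")[0]);
    res.writeHead(206, {
      "Content-Type": "video/mp4",
      "Content-Range": `bytes ${start}-${size - 1}/${size}`,
      "Content-Length": size - start,
    });
    createReadStream(FILE, { start }).pipe(res);
  } else {
    res.writeHead(200, { "Content-Type": "video/mp4", "Content-Length": size });
    createReadStream(FILE).pipe(res);
  }
});

server.listen(8080);
```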
By default, video files are sent from IIS to clients in burst mode, at the highest bandwidth that the connection can support.
There is an extension for IIS that provides support for streaming and bit rate throttling:
http://www.iis.net/extensions/BitRateThrottling
It supports many media types, but not all of them, although it is extensible.

HTTP streaming

Is HTTP streaming possible without using any streaming servers?
Of course. You can output and flush data as it is produced; it reaches the client before the script ends, so it is streaming.
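A minimal sketch of that idea with Node's http module: when no Content-Length is set, Node uses chunked transfer encoding, so each write reaches the client as it happens.

```typescript
import { createServer } from "node:http";

const server = createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  let n = 0;
  // Emit a chunk every second; each write is flushed to the client.
  const timer = setInterval(() => {
    res.write(`chunk ${n++}\n`);
    if (n === 10) {
      clearInterval(timer);
      res.end(); // terminates the chunked response
    }
  }, 1000);
});

server.listen(8080);
```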
For live streaming, only segmented approaches like Apple HLS are widely supported at the moment; other variants of segmented streaming (like OSMF) are not.
IIS from Microsoft can also do Smooth Streaming (and Apple HLS as well).
Apple HLS can be supported on any web server when you pre-segment the stream into chunks and just upload them to a web server path.
For VoD streaming, there are lots of modules for all web servers.
Yes, although libraries have varying levels of support. What needs to be used is "HTTP chunking" (chunked transfer encoding), so that the library does not try to buffer the whole request/response in memory (to compute the Content-Length header) and instead indicates that the content comes in chunks.
Yes, not only is it possible, but it has been implemented by various media server companies; the main reason dedicated servers are still used is commercial. Basically, the content you want to stream is divided into chunks/packets, and the client machine can then request those chunks via simple HTTP GET requests.
Well, if you have WebSockets available, you can actually get quite low-latency streaming for low-fps scenarios by sending video frames as JPEGs.
You can also send audio separately and play it using WebAudio on your browser. I imagine it could work for scenarios where you do not require perfect audio-video sync.
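A sketch of the receiving side of that approach (the endpoint URL and the one-JPEG-per-binary-message format are assumptions):

```typescript
// Sketch: draw incoming JPEG frames onto a canvas as they arrive.
const ws = new WebSocket("wss://example.com/frames"); // placeholder endpoint
ws.binaryType = "blob";

const canvas = document.querySelector("canvas")!;
const ctx = canvas.getContext("2d")!;

ws.onmessage = async (event) => {
  const bitmap = await createImageBitmap(event.data as Blob); // decode JPEG
  ctx.drawImage(bitmap, 0, 0, canvas.width, canvas.height);
  bitmap.close(); // free the decoded frame
};
```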
Another approach is to stream MPEG chunks through WebSockets, decode them in JS using jsmpeg and render to a canvas. You can find more here (video-only):
http://phoboslab.org/log/2013/09/html5-live-video-streaming-via-websockets
Yes, the answer to your problem with HTTP streaming is MPEG-DASH.
