What are download and upload packets? - http

Note: my question remains unanswered.
After hours of reading, I'm confused about download/upload speeds.
Suppose I send an HTTP request to fetch an image; all websites describe this as a "download", but why?
My computer is sending the request and getting a response back, so it's both uploading (sending packets) and downloading (receiving packets). Thus it's not a pure download.
Plus, I read that internet providers prioritize download over upload, but how? Given a packet going from A to B, how can they decide whether it's an upload or a download?
From A's perspective it's an upload, but according to B it's a download...

Where did you read about prioritizing download speed over upload? ISPs can only prioritize traffic by IP (moving it to another channel outside the shaper, or applying different shaper rules to it), or by shaper rules for UDP, VoIP, or traffic per VLAN/port. Websites can optimize the data they serve, for example by caching and compressing content, or by offloading uploads to other upstream servers. But you, as a user, upload raw data (an uncompressed stream of bytes) to the server. To manage speeds, an ISP would have to inspect every packet from every user, and could then limit speed either by packet size (which would only work about half the time, because files are uploaded in a few large requests) or by packet metadata. That is a massive load on the whole infrastructure, and accordingly a very costly process for a business.

Related

Save bandwidth in an HTTP partial-download case

I have a special requirement:
Download a big file from a server over HTTP.
If I detect that the network is poor, give up the current download and download a smaller file instead.
But in my tests, once I send an HTTP GET to the server, it keeps sending the file continuously even if I only read 1024 bytes. I tried closing the connection after detecting that bandwidth is low, but the number of bytes actually downloaded is larger than what I requested.
To save bandwidth, I would like the server not to send more data than I request, but that seems impossible. So what is the actual mechanism to stop the server from sending data when the client only wants part of the file, e.g. 1024 bytes? If I don't disconnect, will the server keep sending data until the whole file is transferred?
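For reference, the standard way to ask a server to send only part of a resource is an HTTP range request. A minimal Python sketch, assuming the host and path are placeholders and the server supports ranges:

```python
import http.client

def range_header(start, end=None):
    # "bytes=0-1023" asks for the first 1024 bytes; "bytes=1024-"
    # asks for everything from offset 1024 onward.
    suffix = "" if end is None else str(end)
    return {"Range": f"bytes={start}-{suffix}"}

def fetch_first_bytes(host, path, n):
    """Fetch only the first n bytes of a resource, if the server
    supports range requests (host and path are placeholders)."""
    conn = http.client.HTTPSConnection(host)
    conn.request("GET", path, headers=range_header(0, n - 1))
    resp = conn.getresponse()
    # 206 Partial Content: the server honored the range.
    # 200 OK: the server ignored it and is sending the whole file,
    # so closing the connection is the only way to stop the transfer.
    data = resp.read(n)
    conn.close()
    return resp.status, data
```

If the server answers 200 instead of 206, it does not support ranges, and aborting the TCP connection is the only option; some bytes already in flight or buffered will still arrive, which matches the behavior described above.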

What is the recommended HTTP POST content length?

I have several clients that constantly POST data to a REST service. The REST service sits behind a network load balancer. Each client sends 100-500 MB a day, and I need to support 500+ clients.
I can POST very large bodies, which reduces the overhead of TCP/IP session setup and HTTP headers. This will, however, firmly tie one client to a particular server and limit my scalability options. Alternatively, I can send small HTTP requests, which I can load-balance well, but I will pay more overhead in TCP/IP session setup and HTTP headers.
What is the recommended body size for an HTTP POST? Or how can I calculate one for my environment?
There is no recommended size.
While HTTP POST size is not constrained by the RFCs, since HTTP is a commodity protocol implementing request/response messaging, most infrastructure is configured around the idea that TCP connections are not particularly long-lived and do not carry significant amounts of data. In other words, there will be factors outside your control that may impact the service; although HTTP supports range requests for responses, there is no corollary for requests.
You can get around a lot of these (although not all) by using HTTPS. However, you still need to think about how you detect and manage outages: are you happy to wait for a TCP timeout?
With 500+ clients presumably using the system quite heavily, the congestion-avoidance limits shouldn't be a problem; whether TCP window scaling is likely to be an issue depends on how the system is used. HTTP handshakes should not be an issue unless you restrict the request size to something silly.
If the service depends heavily on clients pushing lots of data to your server, then I'd encourage you to look at parsing the data on the client (given the volume, presumably it's coming from files, implying a signed Java applet or JavaScript with the UniversalBrowserRead privilege) and then sending it over a bi-directional communication channel (e.g. a WebSocket).
Leaving that aside for now, the only way to find out what the route between your clients and your server will support is to measure it, and keep monitoring it. I would expect a 2 MB upload size to work pretty much anywhere, and a 10 MB size to work most of the time within the US or Europe; you could probably increase this to 50 MB as long as there are no mobile clients.
But if you want to maintain the effectiveness of the service, you'll need to monitor bandwidth, packet loss, and lost connections.
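To illustrate the small-request side of the trade-off, a client could split its daily payload into fixed-size POSTs so the load balancer can spread them across servers. A sketch, where the 2 MB default chunk size and the URL are assumptions to be tuned by measurement:

```python
import urllib.request

def chunk_payload(data: bytes, chunk_size: int = 2 * 1024 * 1024):
    """Split a payload into chunks of at most chunk_size bytes,
    each intended to be sent as a separate POST."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def post_in_chunks(url, data, chunk_size=2 * 1024 * 1024):
    """POST each chunk as its own request so a load balancer can
    route chunks to different servers (url is a placeholder)."""
    for part in chunk_payload(data, chunk_size):
        req = urllib.request.Request(url, data=part, method="POST")
        with urllib.request.urlopen(req) as resp:
            resp.read()  # drain the response before the next chunk
```

Each chunk is an independent request, so it can land on any backend, at the cost of one TCP/HTTP setup per chunk, which is exactly the overhead the question describes.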

find out connection speed on http request?

Is it possible to find out the connection speed of a client when it requests a page on my website?
I want to serve video files, but depending on how fast the client's network is, I would like to serve higher- or lower-quality video. Google Analytics shows me the clients' connection types; how can I find out what kind of network a visitor is connected to?
Thanks
No, there's no feasible way to detect this server-side, short of monitoring the network stream's send buffer while streaming something. If you can switch quality mid-stream, that is a viable approach: if the user's connection suddenly gets burdened by a download, you could detect this and switch to a lower-quality stream.
But if you just want to detect the speed initially, you'd be better off doing this detection on the client and sending the results to the server with the video request.
Assign each request a token (e.g. /videos/data.flv?token=uuid123) and calculate the amount of data your web server sends for that token per second (possibly checking multiple tokens for one username over a time period). You can do this with the Apache sources and APR.
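The client-side measurement suggested above can be as simple as timing the download of a small test file. A sketch, where the test URL and the bitrate threshold are assumptions chosen by the site:

```python
import time
import urllib.request

def measure_download_bps(url, max_bytes=256 * 1024):
    """Time the download of up to max_bytes from url and return
    the observed speed in bytes per second (url is a placeholder
    for a test file hosted next to the videos)."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        data = resp.read(max_bytes)
    elapsed = time.monotonic() - start
    return len(data) / elapsed if elapsed > 0 else float("inf")

def pick_quality(bps, threshold_bps=500_000):
    """Map a measured speed to a stream quality; the threshold
    is an assumed example value, not a recommendation."""
    return "high" if bps >= threshold_bps else "low"
```

The client would run the measurement once, then pass the result (or the chosen quality) along with the video request, as described above.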

video streaming

I am designing an application for streaming video. I have developed a model in which a server waits for incoming requests. The server itself is serving a good number of clients and can't afford to serve any more. Now, when a new connection comes in, the server chooses from among its clients a candidate client that will serve the request of the incoming client. The thing is that this choice should be very intelligent. I am currently using various heuristics, like the bandwidth of the selected client, its location, and its distance from the requesting client, to come to a decision. My question is: is there any tool available to find out the bandwidth and location of a host, and the distance (maybe in hop count)? For hop count I could use traceroute, but that would be too expensive, as it takes a long time for every intermediate router to reply.
Any help will be appreciated.
Thanks!
Use traceroute to find the number of hops.
Use the DNSstuff APIs to find the location.
Do some TCP packet exchange to estimate a client's bandwidth. You will at least be able to rank clients from highest to lowest bandwidth relative to each other.
If a client is going to serve an older video, take the amount of data it holds into consideration (i.e. the bigger the content, the higher the chance of streaming the correct data).
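The TCP-exchange idea might look like the following sketch: time a fixed payload against an echo-style service assumed to run on each client (the port and payload size are illustrative), then rank candidates by the measured rate:

```python
import socket
import time

def probe_bandwidth(host, port, payload_size=64 * 1024):
    """Send payload_size bytes to an echo-style service on
    (host, port), time the round trip, and return an approximate
    rate in bytes per second. The echo service is an assumption."""
    payload = b"\x00" * payload_size
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(payload)
        s.shutdown(socket.SHUT_WR)
        while s.recv(4096):
            pass  # drain the echoed data
    elapsed = time.monotonic() - start
    # Payload traveled both directions, hence the factor of 2.
    return (2 * payload_size) / elapsed if elapsed > 0 else float("inf")

def pick_serving_client(bandwidths):
    """Given {client_id: bytes_per_second}, pick the fastest client."""
    return max(bandwidths, key=bandwidths.get)
```

As the answer notes, this only gives a relative ranking: the absolute numbers depend on the probe size and on transient network conditions.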

How do Download Managers download huge files on HTTP without multiple requests?

I was downloading a 200 MB file yesterday with FlashGet, and its statistics showed it was using the HTTP/1.1 protocol.
I was under the impression that HTTP is a request-response protocol generally used for web pages weighing a few KiB... I don't quite understand how it can download MBs or GBs of data, and do so simultaneously through 5 (or more) different streams.
HTTP/1.1 has a "Range" header that can specify what part of a file to transfer over the connection. The download manager can make multiple connections, specifying different ranges to transfer. It would then combine the chunks together to build the full file.
There is no size limit in HTTP. It is used for web pages, but it also delivers the huge majority of content on the Internet. Sizes are limited by bandwidth rather than by the protocol itself; of course, this was more of a limit in the early days (and, I suppose, still is for those on dial-up).
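The Range mechanism described above can be sketched in a few lines of Python. In practice the total size would come from a HEAD request, which is omitted here, and the URL is a placeholder:

```python
import concurrent.futures
import urllib.request

def split_ranges(total_size, parts):
    """Divide [0, total_size) into `parts` contiguous byte ranges."""
    step = total_size // parts
    ranges, start = [], 0
    for i in range(parts):
        end = total_size - 1 if i == parts - 1 else start + step - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

def fetch_range(url, start, end):
    """Fetch one byte range; a server that supports ranges replies 206."""
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def parallel_download(url, total_size, parts=5):
    """Download `parts` ranges concurrently, then reassemble them
    in order to rebuild the full file."""
    rs = split_ranges(total_size, parts)
    with concurrent.futures.ThreadPoolExecutor(max_workers=parts) as pool:
        chunks = pool.map(lambda r: fetch_range(url, r[0], r[1]), rs)
    return b"".join(chunks)
```

This is essentially what a download manager's "5 streams" are: five connections, each requesting a disjoint byte range of the same file.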
These links might help:
HTTP
HTTP Persistent Connections
Chunked Transfer Encoding
