HTTP maximum GET/POST query parameters supported

I have read many questions about URL length limits in HTTP, but I still cannot find an answer to how many parameters HTTP supports at maximum.
What is the maximum number of parameters supported in HTTP? By parameters I mean the key/value pairs in the query string, e.g.:
https://www.google.com/search?q=cookies&ie=utf-8&oe=utf-8
Here there are 3 parameters, q, ie and oe, with their corresponding values.

The query string is governed by RFC 3986, section 3.4, which specifies no limit apart from the allowed characters. You will also struggle to find any limit on the logical number of parameters, since there has never been a real specification of the format; what exists is rather a best practice heavily influenced by what CGI does. So the number of parameters is very much bound by what the client or server is willing to transfer/accept (the lower bound wins, obviously). Per this answer, you can find a rough estimate here.

There is no limit on the number of parameters; it's all about data size, i.e. how many KB you are sending in your GET request. This value is configurable on the web server side (Apache, Tomcat, etc.).
The default limit for the length of the request line in Apache is 8190 bytes, and this value can be raised or lowered.
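Since the practical constraint is total request size rather than parameter count, a client can simply measure the encoded URL before sending. A minimal TypeScript sketch, assuming Apache's default LimitRequestLine of 8190 bytes (other servers differ):

```typescript
// Build the query string and check the encoded URL length before
// sending: servers cap the request-line size, not the parameter count.
const params = new URLSearchParams({ q: "cookies", ie: "utf-8", oe: "utf-8" });
const url = `https://www.google.com/search?${params}`;

const APACHE_REQUEST_LINE_LIMIT = 8190; // Apache's default LimitRequestLine

if (url.length > APACHE_REQUEST_LINE_LIMIT) {
  // Too long for a GET against a default Apache: fall back to POST
  // with the parameters in the body, or split the request.
}
```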

Generating a multipart/byterange response without scanning the parts ahead of sending

I would like to generate a multipart byte range response. Is there a way for me to do it without scanning each segment I am about to send out, since I need to generate multipart boundary strings?
For example, a user can request a byte range that would have me fetch and scan 2 GB of data, which in my case involves loading that data into my (slow) VM as strings and so forth. Ideally I would like to simply state in the response that a part has a length of a certain number of bytes, and be done with it. Is there any tooling that could provide me with this option? I see that many developers just grab a UUID as the boundary, accepting a tiny probability that it will appear somewhere within the part; is that risk small enough that so many people are right to take it?
To explain in more detail: scanning the parts ahead of time (before generating the response) is not really feasible in my case since I need to fetch them via HTTP from an upstream service. This means that I effectively have to prefetch the entire part first to compute a non-matching multipart boundary, and only then can I splice that part into the response.
Assuming the data can be arbitrary, I don’t see how you could guarantee absence of collisions without scanning the data.
If the format of the data is very limited (like... base 64 encoded?), you may be able to pick a boundary that is known to be an illegal sequence of bytes in that format.
Even if your boundary does collide with the data, a valid part boundary must also be followed by headers such as Content-Range; a collision that includes those as well is even more improbable, so the client is likely to treat it as an error rather than consume the wrong data.
Major Web servers use very simple strategies. Apache grabs 8 random bytes at startup and renders them in hexadecimal. nginx uses a sequential counter left-padded with zeroes.
UUIDs are designed to avoid collisions with other UUIDs, not with arbitrary data. A UUID is no more likely to be a good boundary than a completely random string of the same length. Moreover, some UUID variants include information that you may not want to disclose, such as your machine’s MAC address.
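As a rough illustration of the Apache-style strategy just described, here is a minimal Node.js sketch (the 8-byte size mirrors Apache's choice, not a requirement):

```typescript
import { randomBytes } from "node:crypto";

// Apache-style multipart boundary: a few random bytes rendered as hex.
// A collision with arbitrary part data is possible in principle, but a
// stray match would not be followed by valid part headers (Content-Range
// etc.), so a client would treat it as an error rather than as bad data.
function makeBoundary(): string {
  return randomBytes(8).toString("hex"); // e.g. "9f86d081884c7d65"
}
```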
Ideally I would like to simply state in the response that a part has a length of a certain number of bytes, and be done with it. Is there any tooling that could provide me with this option?
Maybe you can avoid supporting multiple ranges and simply tell the clients to request each range separately. In that case, you don’t use the multipart format, so there is no problem.
If you do want to send multiple ranges in one response, then RFC 7233 requires the multipart format, which requires the boundary string.
You can, of course, invent your own mechanism instead of that of RFC 7233. In that case:
- You cannot use 206 (Partial Content). You must use 200 (OK) or some other applicable status code.
- You cannot use the multipart/byteranges media type. You must come up with your own media type.
- You cannot use the Range request header.
Because a 200 (OK) response to a GET request is supposed to carry a (full) representation of the resource, you must do one of the following:
- encode the requested ranges in the URL; or
- use something like POST instead of GET; or
- use a custom, non-standard status code instead of 200 (OK); or
- (not sure if this is a correct approach) use media type parameters, send them in Accept, and add Accept to Vary.
The chunked transfer coding may be useful, but you cannot rely on it alone, because it is a property of the connection, not of the payload.
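Note also that the standard RFC 7233 route never forces you to scan part bodies: you only need their byte lengths to frame them. A hypothetical sketch of streaming such a response, where fetchRange is a stand-in for the upstream range request (not a real API):

```typescript
// Placeholder for the upstream fetch; a real version would issue an
// HTTP Range request and stream the bytes through untouched.
async function* fetchRange(start: number, end: number): AsyncGenerator<Uint8Array> {
  yield new Uint8Array(end - start + 1);
}

// Stream a multipart/byteranges body around parts of known length,
// without ever inspecting their contents.
async function* byterangesBody(
  boundary: string,
  parts: { start: number; end: number; total: number }[]
): AsyncGenerator<Uint8Array> {
  const enc = new TextEncoder();
  for (const p of parts) {
    yield enc.encode(
      `\r\n--${boundary}\r\n` +
        `Content-Type: application/octet-stream\r\n` +
        `Content-Range: bytes ${p.start}-${p.end}/${p.total}\r\n\r\n`
    );
    yield* fetchRange(p.start, p.end); // spliced through, never scanned
  }
  yield enc.encode(`\r\n--${boundary}--\r\n`);
}
```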

Elevation service UNKNOWN_ERROR

I'm having difficulty with the Google Maps V3 JavaScript elevation service.
According to a Google Groups posting ( https://groups.google.com/forum/#!msg/google-maps-js-api-v3/Z6uh9HwZD_k/G1ur1SJN7fkJ ), it appears that if you use getElevationAlongPath(), the API compresses and sends the entire path to the Google server as an Ajax GET request and subsamples it on their server. This means that if you have a large number of path segments, the encoded URL exceeds the maximum URL length and the request fails with UNKNOWN_ERROR.
Can anyone confirm that this is a URL length issue?
I've tried doing my own subsampling along the path and sending just the points I want elevation data for as a getElevationForLocations() request. This does seem to be an improvement, but I'm still getting some UNKNOWN_ERROR responses, and they occur unpredictably. Sometimes a request with 400 points returns successfully; other requests fail with only 300 points passed. I'm guessing that this is still a problem with URL length (presuming getElevationForLocations() also sends URL-encoded data to Google).
The documentation says that "you may pass any number of multiple coordinates within an array, as long as you don't exceed the service quotas." This doesn't seem to be the case.
Does anyone have any suggestions for a reliable way to get a large number of elevation data points (500?) from a long path?
Thanks,
Colin
After a bit more digging, this seems to be the situation.
The JavaScript API for elevation uses the HTTP elevation service behind the scenes. The HTTP elevation service docs do say that requests are limited to 2048 characters. However, if you're using the HTTP service directly, you build your own URLs. This means you can check the length before sending. If you use the JavaScript API, the URL is built for you, but the API code doesn't check the URL length before sending.
The call endpoint URL and the necessary parameters take up 78 characters, leaving 1,970 for the encoded points.
This is where it gets messy. The number of characters in an encoded point varies with the size and precision of the lat and lng values: generally somewhere between 8 and 12 characters per point. An added complication is that some of the characters used in the path encoding may need URL-encoding, further increasing the number of characters needed per point by an unknown but potentially significant amount (2 extra characters for each path character in need of URL-encoding).
All of these complications mean that it's theoretically possible for a call to produce too long a URL with just 55 points - very, very unlikely though. A safe limit is probably 150 points (but this may still fail occasionally). 200 should work most of the time. 250 should be about the maximum.
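The arithmetic behind those figures, as a quick sketch (the per-point sizes are the estimates above, not documented values):

```typescript
const MAX_URL = 2048;              // documented HTTP elevation service limit
const OVERHEAD = 78;               // endpoint plus fixed parameters, as measured
const budget = MAX_URL - OVERHEAD; // 1970 characters left for encoded points

const bestCase = Math.floor(budget / 8);            // 246 points at 8 chars each
const typicalWorst = Math.floor(budget / 12);       // 164 points at 12 chars each
const fullyEscaped = Math.floor(budget / (12 * 3)); // 54 points if every char
                                                    // needs %xx URL-encoding
```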
In reality, from a small number of tests:
- 200 worked every time
- 300 usually works
- 400 sometimes works
The discrepancy between the calculation and the tests suggests that the JavaScript API may be doing some further form of compression, or that I've got something wrong in my calculations.
Your suspicions are correct, this is a URL length issue. If you have Chrome's Developer Tools open when you submit the request, you'll see an HTTP 414 (Request-URI Too Large) error. The URL is around 3000 characters, which is about 1000 too many (2048 is a common max URL length).
Internally the Google Maps API converts all those points to what looks like an encoded polyline, which helps compress that data down, but it's clearly not enough for this really long path. It might be worth splitting the request up into multiple segments when you know you're going to be including more than N points (I'd experiment with N to see what works).
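One way to implement that splitting, as a hedged sketch against the Maps v3 JavaScript API (it assumes the API is loaded; the batch size of 200 is just the empirically safer figure from the tests above):

```typescript
// Split a long point list into batches so each generated request URL
// stays comfortably under the ~2,048-character limit.
const BATCH_SIZE = 200; // tune experimentally for your data

function getElevations(points: google.maps.LatLng[]): void {
  const elevator = new google.maps.ElevationService();
  for (let i = 0; i < points.length; i += BATCH_SIZE) {
    elevator.getElevationForLocations(
      { locations: points.slice(i, i + BATCH_SIZE) },
      (results, status) => {
        if (status !== google.maps.ElevationStatus.OK) {
          // Likely still URL length or quota: retry with a smaller batch.
          return;
        }
        // ...accumulate results here, keeping batches in order.
      }
    );
  }
}
```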

What is the maximum length of an HTML textbox

Can anyone help me with the maximum number of characters that a normal HTML text box can contain?
As to the HTML side: when the maxlength attribute is not specified, the maximum length of the input value is unlimited. However, if you're sending the request as GET instead of POST, the limit will depend on the web browser and web server used. The HTTP 1.1 specification even warns about this; here's an extract from chapter 3.2.1:
Note: Servers ought to be cautious about depending on URI lengths
above 255 bytes, because some older client or proxy
implementations might not properly support these lengths.
As to the web browsers, the practical limit is about 8 KB in Firefox, about 4 KB in Opera, and about 2 KB in IE and Safari. So the total length of all inputs should not exceed this if you want successful processing. As to the web servers, most have a configurable limit of 8 KB. When the limit is exceeded, the request will often just be truncated, but some web servers may send an HTTP 414 error.
When you're sending the request as POST, then the limit depends on the server config. Often it's around 2GB. When it's exceeded, the server will often return a HTTP 500 error.
Default maxlength is unlimited for an <input type='text'/>. You may optionally provide this value to constrain input (but there's no guarantee the browser will enforce the rule).
A <textarea> does not support a maxlength, so unlimited characters are accepted for input.
(ref: https://www.w3.org/MarkUp/HTMLPlus/htmlplus_41.html)
RE: Long string breaking during submit
There can be a maximum size to the amount of data submitted from a form when using the get method (the default if not specified). It's only a "can" because many browsers now allow many more characters. If you use a form with the post method, there is no maximum to the amount of data submitted.
In HTML4, the maxlength attribute is only supported on the input element. HTML5 extends this to allow it on textarea as well. A quick test works in Firefox 4 and WebKit, but not Firefox 3 or Opera. If you need support for HTML4, use JavaScript to manually limit the length.
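For that JavaScript fallback, a minimal sketch (truncate on input; the limit and selector are illustrative):

```typescript
// Enforce a maximum length on a <textarea> in browsers without native
// maxlength support, by truncating as the user types.
const MAX_LENGTH = 500; // whatever limit your form needs

const textarea = document.querySelector<HTMLTextAreaElement>("textarea");
textarea?.addEventListener("input", () => {
  if (textarea.value.length > MAX_LENGTH) {
    textarea.value = textarea.value.slice(0, MAX_LENGTH);
  }
});
```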

Google Translation API

Has anyone used the Google Translation API? What is the maximum length limit for using it?
The limit was 500... now it is 5000 chars.
source
500 characters
source
At the moment, the throttle limit is 100,000 characters per day. Looks like you can apply to have that limit increased/removed.
I've used it to translate Japanese to English.
I don't believe the 500-character limit applies if you use http://code.google.com/p/jquery-translate/, but one thing that is true is that you're restricted in the number of requests you can make within a certain period of time. They also try to detect whether you're sending a lot of requests within a short period, almost like a mini "denial of service" attack.
So when I did this I wrote a client with a random length sleep between requests. I also ran it on a grid so all the requests didn't come from a single IP address.
I had to translate ~2000 Java messages from a resource bundle from Japanese to English. It worked out pretty nicely, as long as the text was single words. Longer phrases with context came out awkwardly.
Please have a look at this link; it gives the correct answer at the bottom of the page.
https://developers.google.com/translate/v2/faq
What is the maximum number of characters per request?
The maximum size of each text to be translated is 5000 characters, not including any HTML tags.
You can send source strings of up to 5,000 characters, but there are a few provisos that are sometimes lost.
You can only send the 5,000 characters via the POST method.
If you use the GET method, you are limited by the 2,000-character length limit on URLs; if a URL is longer than that, Google's servers will just reject it.
Note: the 2,000-character limit includes the path and the rest of the query string, and you must also count URI encoding (for instance, every space becomes %20 and every quotation mark %22).
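In practice that means sending the text in the POST body rather than the query string. A minimal sketch against the v2 REST endpoint (YOUR_API_KEY is a placeholder; the length check mirrors the documented limit):

```typescript
// POST keeps the source text out of the URL, so the 5,000-character
// source limit applies instead of the ~2,000-character URL limit of GET.
async function translate(text: string, target = "en"): Promise<string> {
  if (text.length > 5000) {
    throw new Error("Source text exceeds the 5,000-character limit");
  }
  const res = await fetch(
    "https://translation.googleapis.com/language/translate/v2?key=YOUR_API_KEY",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ q: text, target }),
    }
  );
  const json = await res.json();
  return json.data.translations[0].translatedText;
}
```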
The Cloud Translation API is optimized for translating smaller requests. The recommended maximum length for each request is 5K characters (code points). However, the more characters you include, the higher the response latency. For Cloud Translation - Advanced, the maximum number of code points for a single request is 30K. Cloud Translation - Basic has a maximum request size of 100K bytes.
https://cloud.google.com/translate/quotas

What is the optimum limit for URL length? 100, 200+

I have an ASP.NET 3.5 platform and a Windows 2003 server with all the updates.
There is a limit with .NET: it cannot handle more than 260 characters. Moreover, if you look it up on the web, you will find that IE 6, if not patched, fails to work above 100 characters.
I want the rewrite path module to be supported on the maximum number of browsers, so I am looking for an acceptable limit to which I can create verbose URLs.
A URL is path + querystring, and the linked article only talks about limiting the path. Therefore, if you're using ASP.NET, don't exceed a path of 260 characters. Less than 260 will always work, and ASP.NET has no trouble with long querystrings.
```
http://somewhere.com/directory/filename.aspx?id=1234
                                            ^^^^^^^^  querystring
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^          path
```
Typically the issue is with the browser. Long ago I did tests and recall that many browsers support 4K URLs, except for IE, which limits them to 2,083, so for all practical purposes, limit it to 2,083. I don't know if IE7 and 8 have the same limitation, but if you're going for broad compatibility, you need to go for the lowest common denominator.
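If you want to enforce that lowest common denominator programmatically, a small sketch:

```typescript
// Guard a GET URL against IE's historical 2,083-character cap before
// issuing the request.
const IE_MAX_URL_LENGTH = 2083;

function buildGetUrl(base: string, params: Record<string, string>): string | null {
  const url = `${base}?${new URLSearchParams(params)}`;
  return url.length <= IE_MAX_URL_LENGTH ? url : null; // null => use POST instead
}
```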
There is no length limit specified by the W3C, but look here for practical limits
http://www.boutell.com/newfaq/misc/urllength.html
pick your own limit from that.
The default limit in IIS is 16,384 characters
But IE doesn't support more than 2083
More info at link
This article gives the limits imposed by various browsers. It seems that IE limits the URL to 2083 chars, so you should probably stay under that if any of your users are on IE.
Define "optimum" for your application.
The HTTP standard itself places no hard limit (so it depends on your application):
The HTTP protocol does not place any a priori limit on the length of a URI. Servers MUST be able to handle the URI of any resource they serve, and SHOULD be able to handle URIs of unbounded length if they provide GET-based forms that could generate such URIs. A server SHOULD return 414 (Request-URI Too Long) status if a URI is longer than the server can handle (see section 10.4.15).
Note: Servers ought to be cautious about depending on URI lengths above 255 bytes, because some older client or proxy implementations might not properly support these lengths.
So the question is - what is the limit of your program, or what is the maximum resource identifier size your program needs to perform all its functionality?
Your program should have a natural limit.
If it doesn't, you might as well set it at 16K, as you don't have enough information to define the problem.
-Adam
Short ;-)
The problem is that every web server and every browser has its own idea of how long the maximum is. The RFC for the HTTP protocol gives no maximum length. IE limits the GET to 2,083 characters; the path itself may be at most 2,048 characters. However, this limit is not universal. Firefox claims to support at least up to 65,536, and some people have verified that on some platforms even 100,000 characters work. Safari is above 80,000 (tested). The Apache server, on the other hand, has a limit of 4,000. Microsoft's Internet Information Server has one of 16,384 (but it is configurable).
My recommendation is to stay below 2,000 characters in any case. This is not guaranteed to work with every browser in the world (especially not older ones), but it will work with all modern browsers. Furthermore, I recommend using POST wherever possible (e.g. avoid using GET for form submits). If some users want to simulate a form submit via GET, make sure your application supports the desired parameters either via POST or via GET, but when you submit the page yourself via a button or JS, prefer POST over GET.
I think the RFC says 4096 chars but IE truncates down to 2083 characters. Stay well under that to be safe.
Practically, shorter URLs are friendlier.
More information is needed, but for normal situations I would say try to keep it under 150 for sure. If for nothing else than pure aesthetics, I hate when someone sends me a GI-NORMOUS link...
Are you passing values through the query string? I assume that is why you asked, correct?
What is "optimum" anyway?
GET requests can be several kB in length, so this is entirely subjective.
I'd say - stay within the address bar length of a maximized 1024x768 window to be user friendly.
If you're trying to get people to remember the URL, I wouldn't go more than 60. Use words if possible, because it's easier to remember "www.example.com/this-is-the-url" than "www.example.com/179264". If you're trying to get the page indexed, you could probably go more. The spiders look for words in the title too, and some people may be more likely to click on the link if the URL looks readable.
When you say "Optimum", I think "Easily Accessible To Users", in which case, I think the shorter the URL, the better. I would think 20-30 characters maximum, in that case.
