Maximum Cookie Size of current browsers (Year 2018) - http

From the Django docs:
Both RFC 2109 and RFC 6265 state that user agents should support cookies of at least 4096 bytes. For many browsers this is also the maximum size.
Source: https://docs.djangoproject.com/en/2.1/ref/request-response/
Is this still valid today?
What is the maximum cookie size of current browsers?

The cookie specification in RFC 6265 (April 2011) is still the current RFC (there is no newer draft or RFC) and is supported by all major browsers (IE, Chrome, Opera, Firefox) today.
At least 4096 bytes for the entire cookie (as measured by the sum of all of the cookie names, values, and attributes).
At least 50 cookies per domain, provided they don't go over the above limit.
At least 3000 cookies total.
So all modern browsers support AT LEAST this. Relying on any higher limits is a gamble.
See section 6.1 ("Limits") in https://datatracker.ietf.org/doc/rfc6265/ for more details.
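If you want to stay inside that guaranteed 4096-byte budget, here is a rough sketch of estimating a cookie's serialized size before setting it (the helper name and the attribute string are illustrative assumptions, not anything from the RFC):

using System;

class CookieSizeCheck
{
    // Rough size of a Set-Cookie payload: name=value plus its attribute string.
    // This approximates the "names, values, and attributes" sum that RFC 6265
    // says user agents must support up to at least 4096 bytes.
    static int EstimatedCookieSize(string name, string value, string attributes)
    {
        return name.Length + 1 + value.Length + attributes.Length; // +1 for '='
    }

    static void Main()
    {
        string attributes = "; Path=/; Secure; HttpOnly";
        int size = EstimatedCookieSize("session", new string('x', 4000), attributes);
        Console.WriteLine(size <= 4096
            ? "fits the guaranteed minimum"
            : "may be dropped or truncated by some browsers");
    }
}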

You can test it yourself by setting a cookie and reading it back from JavaScript in a loop, if you are only interested in modern browsers.
That is what I did in the past, and it is exactly what the site linked below is about; it also includes the limits by browser.
But keep in mind that the matching cookies travel with every HTTP request, so they could dramatically affect the perceived response time.

Here are the details you can refer to: http://browsercookielimits.iain.guru/
Typically, the following are allowed:
300 cookies in total
4096 bytes per cookie
20 cookies per domain
81920 bytes per domain*
* Given 20 cookies of max size 4096 bytes: 20 × 4096 = 81920 bytes.

Related

How long can an ETag be (storing AWS S3 ETag in database)

Assuming I want to store an ETag in a database column, what length should I allocate?
As far as I can tell there is no limit on the length of an ETag in the spec (https://www.rfc-editor.org/rfc/rfc7232#section-2.3). Even if I use a varchar(max), technically someone could use more than 2 billion characters in an ETag, but we know that's not realistic. We also know web servers will barf on more than a few KB of total headers (Maximum on HTTP header values?), so the limit is way lower than that.
Typically ETags are going to be hashes (they don't have to be; '5' is a perfectly valid ETag), so I'm thinking 64 bytes is a minimum (SHA-512), and 100 is probably 'safe'. But does anyone have a better limit? What have people seen in the wild?
(I actually only care about AWS S3 ETag values; if someone has an answer for that specific case, I'll take it.)
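For what it's worth, here is a rough sketch of the length arithmetic for common ETag shapes. The S3 multipart format ("<md5-of-part-md5s>-<partCount>", with a cap of 10,000 parts) reflects my own understanding and is not stated in the question, so verify it against AWS's documentation:

using System;

class EtagLengths
{
    static void Main()
    {
        // Hex-encoded digests, which most ETags in the wild are:
        int md5Hex = 128 / 8 * 2;     // 32 chars (e.g. a single-part S3 upload)
        int sha256Hex = 256 / 8 * 2;  // 64 chars
        int sha512Hex = 512 / 8 * 2;  // 128 chars -- a hex SHA-512 would not fit in 100

        // S3 multipart ETags look like "<md5-of-part-md5s>-<partCount>", and part
        // counts are capped at 10000, so: 32 + 1 + 5 = 38 chars (plus 2 if you
        // store the surrounding double quotes).
        int s3MultipartMax = md5Hex + 1 + 5;

        Console.WriteLine(md5Hex);         // 32
        Console.WriteLine(sha256Hex);      // 64
        Console.WriteLine(sha512Hex);      // 128
        Console.WriteLine(s3MultipartMax); // 38
    }
}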

Max value for cache control header in HTTP

I'm using Amazon S3 to serve static assets for my website. I want to have browsers cache these assets for as long as possible. What metadata headers should I include with my assets?
Cache-Control: max-age=???
Generally one year is advised as a standard max value. See RFC 2616:
To mark a response as "never expires," an origin server sends an Expires date approximately one year from the time the response is sent. HTTP/1.1 servers SHOULD NOT send Expires dates more than one year in the future.
Although that applies to the older Expires header, it makes sense to apply it to Cache-Control too in the absence of any explicit standards guidance. It's as long as you should generally need anyway, and picking an arbitrarily longer value could break some user agents. So:
Cache-Control: max-age=31536000
Consider not storing it for "as long as possible," and instead settling for as long as reasonable. For instance, it's unlikely you'd need to cache it for longer than say 10 years...am I right?
The RFC discusses max-age here: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.3
Eric Lawrence says that prior to IE9, Internet Explorer would treat as stale any resource with a Cache-Control: max-age value over 2147483648 (2^31) seconds, approximately 68 years (http://blogs.msdn.com/b/ie/archive/2010/07/14/caching-improvements-in-internet-explorer-9.aspx).
Other user agents will of course vary, so...try and choose a number that is unlikely (rather than likely!) to cause an overflow. Max-age greater than 31536000 (one year) makes little sense, and informally this is considered a reasonable maximum value.
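If you end up fronting the assets with your own ASP.NET code rather than serving them straight from S3, a minimal sketch of emitting that header might look like this (the handler shape and the choice of public cacheability are assumptions about your setup, not part of the answer above):

using System;
using System.Web;

// A minimal handler that serves an asset with a one-year cache lifetime.
public class StaticAssetHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Emits: Cache-Control: public, max-age=31536000
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetMaxAge(TimeSpan.FromSeconds(31536000));

        // ... write the asset bytes to context.Response here ...
    }

    public bool IsReusable { get { return true; } }
}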
The people who created the recommendation of maximum 1 year caching did not think it through properly.
First of all, if a visitor is being served an outdated cached file, then why would it provide any benefit to have it suddenly load a fresh version after 1 year? If a file has 1 year TTL, from a functional perspective, it obviously means that the file is not intended to be changed at all.
So why would one need more than 1 year?
1) Why not? It doesn't serve any purpose to tell the visitor's browser "hey, this file is 1 year old, it might be an idea to check if it has been updated".
2) CDN services. Most content delivery networks use the cache header to decide how long to serve a file efficiently from the edge server. If you have 1 year cache control for the files, the CDN will at some point start re-requesting unchanged files from the origin server, and the edge cache will need to be entirely re-populated, causing slower loads for clients and unnecessary calls to the origin.
What is the point of having max 1 year? What browsers will choke on an amount set higher than 31536000?

How many A records can fit in a single DNS response?

What are the size limits on DNS responses? For instance how many 'A' resource records can be present in a single DNS response? The DNS response should still be cache-able.
According to RFC 1035 (referenced below), the limit is based on the UDP message size limit, which is 512 octets. The EDNS standard supports a negotiated response with a virtually unlimited response size, but at the time of that writing (March 2011) only 65% of clients supported it (which means you can't really rely on it).
The largest guaranteed supported DNS message size is 512 bytes.
Of those, 12 are used up by the header (see §4.1.1 of RFC 1035).
The Question Section appears next, but is of variable length - specifically it'll be:
the domain name (in wire format)
two bytes each for QTYPE and QCLASS
Hence the longer your domain name is, the less room you have left over for answers.
Assuming that label compression is used (§4.1.4), each A record will require:
two bytes for the compression pointer
two bytes each for TYPE and CLASS
four bytes for the TTL
two bytes for the RDLENGTH
four bytes for the A record data itself
i.e. 16 bytes for each A record (§4.1.3).
You should if possible also include your NS records in the Authority Section.
Given all that, you might squeeze around 25 records into one response.
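To make the arithmetic concrete, here is a small sketch that estimates how many A records fit in a 512-byte response for a given query name; it ignores EDNS, and the Authority Section only counts if you pass its size in:

using System;

class DnsMath
{
    static int MaxARecords(string queryName, int authorityBytes = 0)
    {
        // Wire format of the name: one length byte per label plus the label bytes,
        // terminated by a zero byte -- e.g. "example.com" -> 13 bytes.
        string[] labels = queryName.Split('.');
        int nameLen = 1; // terminating zero byte
        foreach (string label in labels)
            nameLen += 1 + label.Length;

        int header = 12;                        // fixed DNS header
        int question = nameLen + 4;             // name + QTYPE + QCLASS
        int perRecord = 2 + 2 + 2 + 4 + 2 + 4;  // pointer, TYPE, CLASS, TTL, RDLENGTH, RDATA = 16

        int available = 512 - header - question - authorityBytes;
        return available / perRecord;
    }

    static void Main()
    {
        Console.WriteLine(MaxARecords("example.com")); // roughly 30 with no Authority Section
    }
}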

Splitting and recombining a large string in Cookies using ASP.NET

I have a large string that I want to save in a cookie, however I don't know what the best practices are for max string length per cookie, and max cookie count.
What logic should I use to split the string and later combine a set of cookies?
(Microsoft ADFS, and perhaps SiteMinder, use this technique, so I would be interested in what their implementation is.)
Cookies are handled by browsers, so each browser has different limits.
Splitting the cookie can help only temporarily, because there is also a limit on the total cookie data for each site, and you also add overhead to the data transfer on each page.
The limits for each browser per cookie:
Internet Explorer handles a max cookie of about 3904 bytes.
Mozilla Firefox handles a max cookie of about 3136 bytes.
When I ran some tests on Chrome, Chrome crashed internally with a large cookie, and neither a message nor the page appeared.
Both Netscape and Microsoft have measures in place that limit the number of cookies, based on the RFC 2109 limitation of the total cookie count to 300 (ref: http://www.cookiecentral.com/faq/#2.5).
This is done for many reasons, one of them being abuse: imagine a site that uploads a full video into cookies :) and fills up your hard disk with it...
I'd say the best practice is to keep a small reference cookie on the browser and connect it to the real data on the server. The smaller, the better, in every respect.
To run your own cookie tests, you can use code like this:
// read back whatever the browser sent, then grow the cookie on every request
string current = Request.Cookies["cookieTest"] == null
    ? string.Empty
    : Request.Cookies["cookieTest"].Value;

Response.Cookies["cookieTest"].Value = current + "more text into cookie";

// check now the size
Response.Write(Response.Cookies["cookieTest"].Value.Length.ToString());
My experience shows many random, unpredictable problems when you try to store uncontrolled, large data in cookies. I have heard support say many times: clear your cookies and try again :)
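For completeness, here is a minimal sketch of the split-and-recombine approach the question asks about; the cookie name suffixes, the 3000-byte chunk size, and the count cookie are my own choices, not ADFS's or SiteMinder's actual scheme:

using System;
using System.Text;
using System.Web;

static class ChunkedCookies
{
    // Write a large value across several cookies, each chunk well under the ~4 KB limit.
    // Assumes the value is already cookie-safe (e.g. base64 or URL-encoded).
    public static void Write(HttpResponse response, string name, string value, int chunkSize = 3000)
    {
        int chunks = (value.Length + chunkSize - 1) / chunkSize;
        response.Cookies[name + "-count"].Value = chunks.ToString();
        for (int i = 0; i < chunks; i++)
        {
            int length = Math.Min(chunkSize, value.Length - i * chunkSize);
            response.Cookies[name + "-" + i].Value = value.Substring(i * chunkSize, length);
        }
    }

    // Read the chunks back in order and recombine them; returns null if any chunk is missing.
    public static string Read(HttpRequest request, string name)
    {
        HttpCookie countCookie = request.Cookies[name + "-count"];
        if (countCookie == null) return null;

        var result = new StringBuilder();
        int chunks = int.Parse(countCookie.Value);
        for (int i = 0; i < chunks; i++)
        {
            HttpCookie chunk = request.Cookies[name + "-" + i];
            if (chunk == null) return null; // incomplete set, treat as missing
            result.Append(chunk.Value);
        }
        return result.ToString();
    }
}

Whatever scheme you use, keep the combined size well under the per-domain limits above, or some chunks will silently be dropped.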

What is the optimum limit for URL length? 100, 200+

I have an ASP.Net 3.5 platform and windows 2003 server with all the updates.
There is a limit in .NET: it cannot handle a path of more than 260 characters. Moreover, if you look it up on the web, you will find that IE 6 fails to work above 100 characters if it is not patched.
I want the rewrite path module to be supported by the maximum number of browsers, so I am looking for an acceptable limit up to which I can create verbose URLs.
A URL is path + querystring, and the linked article only talks about limiting the path. Therefore, if you're using ASP.NET, don't exceed a path of 260 characters. Less than 260 will always work, and ASP.NET has no trouble with long querystrings.
http://somewhere.com/directory/filename.aspx?id=1234
querystring: ?id=1234
path:        http://somewhere.com/directory/filename.aspx
Typically the issue is with the browser. Long ago I did tests and recall that many browsers support 4 KB URLs, except for IE, which limits them to 2083 characters, so for all practical purposes, limit it to 2083. I don't know if IE7 and IE8 have the same limitation, but if you're going for broad compatibility, you need to go for the lowest common denominator.
There is no length limit specified by the W3C, but look here for practical limits
http://www.boutell.com/newfaq/misc/urllength.html
pick your own limit from that.
The default limit in IIS is 16,384 characters
But IE doesn't support more than 2083
This article gives the limits imposed by various browsers. It seems that IE limits the URL to 2083 chars, so you should probably stay under that if any of your users are on IE.
Define "optimum" for your application.
Here's what the HTTP standard has to say about limits (beyond that, it depends on your application):
The HTTP protocol does not place any a priori limit on the length of a URI. Servers MUST be able to handle the URI of any resource they serve, and SHOULD be able to handle URIs of unbounded length if they provide GET-based forms that could generate such URIs. A server SHOULD return 414 (Request-URI Too Long) status if a URI is longer than the server can handle (see section 10.4.15).
Note: Servers ought to be cautious about depending on URI lengths above 255 bytes, because some older client or proxy implementations might not properly support these lengths.
So the question is - what is the limit of your program, or what is the maximum resource identifier size your program needs to perform all its functionality?
Your program should have a natural limit.
If it doesn't, you might as well set it at 16 KB, as you don't have enough information to define the problem.
-Adam
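If you do enforce a limit server-side, here is a minimal sketch of what the RFC's 414 advice could look like as an ASP.NET module; the 2083-character threshold is just the IE figure quoted elsewhere in this thread, not a requirement:

using System.Web;

public class UriLengthModule : IHttpModule
{
    private const int MaxUriLength = 2083; // matches the IE limit discussed in this thread

    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            HttpContext context = ((HttpApplication)sender).Context;
            if (context.Request.RawUrl.Length > MaxUriLength)
            {
                // RFC 2616: "A server SHOULD return 414 (Request-URI Too Long) status..."
                context.Response.StatusCode = 414;
                context.Response.End();
            }
        };
    }

    public void Dispose() { }
}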
Short ;-)
The problem is that every web server and every browser has its own idea of how long the maximum is. The RFC for the HTTP protocol gives no maximum length. IE limits a GET to 2083 characters; the path itself may be at most 2048 characters. However, this limit is not universal. Firefox claims to support at least 65,536 characters, and some people have verified that on some platforms even 100,000 characters work. Safari handles more than 80,000 (tested). The Apache server, on the other hand, has a limit of 4,000, and Microsoft's Internet Information Server has one of 16,384 (but it is configurable).
My recommendation is to stay below 2,000 characters in any case. This is not guaranteed to work with every browser in the world (especially not older ones), but it will work with all modern browsers. Further, I recommend using POST wherever possible (e.g. avoid using GET for form submits - if some users want to simulate a form submit via GET, make sure your application supports the desired parameters either via POST or via GET, but when you submit the page yourself via a button or JS, prefer POST over GET).
I think the RFC says 4096 chars but IE truncates down to 2083 characters. Stay well under that to be safe.
Practically, shorter URLs are friendlier.
More information is needed, but for normal situations I would say try to keep it under 150 for sure. If for nothing else than pure aesthetics - I hate when someone sends me a GI-NORMOUS link...
Are you passing values through the query string? I assume that is why you asked, correct?
What is "optimum" anyway?
GET requests can be several kB in length, so this is entirely subjective.
I'd say - stay within the address bar length of a maximized 1024x768 window to be user friendly.
If you're trying to get people to remember the URL, I wouldn't go more than 60. Use words if possible, because it's easier to remember "www.example.com/this-is-the-url" than "www.example.com/179264". If you're trying to get the page indexed, you could probably go more. The spiders look for words in the title too, and some people may be more likely to click on the link if the URL looks readable.
When you say "Optimum", I think "Easily Accessible To Users", in which case, I think the shorter the URL, the better. I would think 20-30 characters maximum, in that case.
