What is the maximum size of a cookie file? - asp.net

Are there any limitations on the size of the cookie? Also, is this browser dependent?

The "official" maximum size is 4KB, but I would prefer to keep it well under that: no more than a few hundred bytes, tops.
The reason is that cookies are transmitted from the client to the server with every single request - even when requesting images, CSS and JS files (if they reside on the same host, something you should avoid in general, though for small sites it may not be worth the bother). That means you'll be requiring the client to transmit 4KB with every request - remembering also that most consumer broadband has a much slower upload speed than download speed.
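To put that overhead in numbers, here is a rough back-of-the-envelope sketch (plain Python; the page and cookie sizes are made up purely for illustration):
requests_per_page = 40                      # hypothetical page pulling 40 same-host assets
for cookie_bytes in (4096, 200):
    upload_overhead = cookie_bytes * requests_per_page
    print(f"{cookie_bytes} B cookie -> ~{upload_overhead / 1024:.1f} KB uploaded per page view")
With a 4KB cookie that is roughly 160KB of upload per page view, versus about 8KB with a 200-byte cookie - a noticeable difference on an asymmetric consumer connection.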

Importantly, the official cookie spec, RFC 2965, states the minimums browsers should adhere to:
5.3 Implementation Limits
Practical user agent implementations have limits on the number and size of cookies that they can store. In general, user agents' cookie support should have no fixed limits. They should strive to store as many frequently-used cookies as possible. Furthermore, general-use user agents SHOULD provide each of the following minimum capabilities individually, although not necessarily simultaneously:
at least 300 cookies
at least 4096 bytes per cookie (as measured by the characters that comprise the cookie non-terminal in the syntax description of the Set-Cookie2 header, and as received in the Set-Cookie2 header)
at least 20 cookies per unique host or domain name
User agents created for specific purposes or for limited-capacity devices SHOULD provide at least 20 cookies of 4096 bytes, to ensure that the user can interact with a session-based origin server.
The information in a Set-Cookie2 response header MUST be retained in its entirety. If for some reason there is inadequate space to store the cookie, it MUST be discarded, not truncated.
Applications should use as few and as small cookies as possible, and they should cope gracefully with the loss of a cookie.
Read more:
http://www.faqs.org/rfcs/rfc2965.html
From the cookie FAQ:
Microsoft saves cookies into the "Temporary Internet Files" folder, a system folder that you can set the maximum size of (the default is 2% of your hard drive).
In any event, remember that most cookie files are 4KB or smaller, so you would need about a million cookies to fill up a 4GB drive. This is incredibly unlikely.
You'll see the 4kb limit referenced around the internet along with other useful stats.

4kb = 4096 bytes
If I recall correctly, independent of browser. See Can cookies get too big.
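If you want to guard against hitting that limit from code, here is a minimal sketch (plain Python; the 4096-byte figure counts the name, value and attributes, which is the measure RFC 6265 uses, quoted later on this page):
def cookie_size_ok(name, value, attributes="", limit=4096):
    # Count the name, value and any attributes against the per-cookie limit.
    return len(name) + len(value) + len(attributes) <= limit
print(cookie_size_ok("session", "x" * 100))   # True - a tiny cookie
print(cookie_size_ok("blob", "x" * 5000))     # False - over the 4096-byte minimum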

Related

Is it possible to transfer a file through CoAP?

Recently, I have been working on a project where I am trying to transfer a JSON file to a CoAP server. I put some random values in key:value pairs, such as:
{
    "key1": "value1",
    "key2": ["value21", "value22", "value23"]
}
Questions:
CoAP is quite similar to HTTP. So, like HTTP, is it possible to transfer a JSON file over CoAP using the POST/PUT method? If it is possible, what is the recommended directory on the server (the resource directory) in which to put the uploaded file?
Update:
The actual file size is about 152.8 kB.
You can transfer arbitrary JSON files using CoAP POST/PUT. Which directory is writable depends entirely on the server.
Note that for a file of that size, transfer times will be considerably longer than with HTTP, as blocks are sent in lock-step (put the first 1 kB, wait for the response, put the next 1 kB, and so on), whereas HTTP benefits from a TCP window.
For a first shot, you may try out eclipse/californium's "simple-fileserver-example" (cf-simple-fileserver). It supports reads (GET) and uses block option 2 for that.
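For a quick client-side experiment with PUT, here is a minimal sketch using the Python aiocoap library (the URI, resource path and content-format handling are assumptions, not something your server necessarily accepts; the library negotiates RFC 7959 blockwise transfer for payloads that do not fit in a single message):
import asyncio
import json
import aiocoap

async def main():
    context = await aiocoap.Context.create_client_context()
    payload = json.dumps({"key1": "value1",
                          "key2": ["value21", "value22", "value23"]}).encode()
    request = aiocoap.Message(code=aiocoap.PUT, payload=payload,
                              uri="coap://example.org/uploads/data.json")  # placeholder URI
    request.opt.content_format = 50  # application/json
    response = await context.request(request).response
    print(response.code, response.payload[:64])

asyncio.run(main())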
If you go deeper and leave the laboratory, RFC 7959 blockwise transfers may face several issues:
CoAP usually assumes that endpoints are identified by their IP address (and port). Because a blockwise transfer may last longer, that assumption may break. If the client is faced with such an address change, block option 2 (GET) may still work, but block option 1 (PUT) would require special preparation.
Because such a blockwise transfer tends to last longer, it may get paused due to temporary transmission issues. That then requires a "resume or fail" strategy. Here too, GET is much easier than PUT.
Basic transmission issues on crashes. In my experience, blockwise transfers use many blocks, so many MIDs are in use in a short period of time. If a client crashes and selects a random MID on startup, the probability of an unnoticed MID clash is rather high. Depending on the CoAP server's deduplication implementation (strict according to RFC 7252, or advanced and aware of that issue), your client may need a strategy to escape the situation where the server retransmits unrelated messages based purely on MIDs. My experience from that time was: "analyse what you get; if it smells, wait for the 247 s :-)". Your client may also save the last used MID to overcome that, or use a special/separate "blockwise endpoint" with deduplication disabled.
IP (intellectual property): from my point of view, some have seen the issues left to the implementation and started to file patents. That may require attention as well.
Altogether: if you use blockwise for payloads of a few kilobytes now and then, my experience is not that bad. But if you regularly transfer more, CoAP may not be the right choice.

maximum http request in a page

How many HTTP requests can a browser handle for a single HTML page?
There is a popular saying that a browser can only handle a certain number of HTTP requests from a single domain, so it is better to create a separate static domain (CDN) so that the requests can be shared between the two domains.
Q1) How many HTTP requests can a browser handle for a single HTML page, or at least what is the saturation point (say, 1000 requests)?
Q2) How many HTTP requests from a single domain name can a browser handle (say, 100 from the same domain name)?
Also, any suggestions for best practices?
Section 8.1.4 of the HTTP/1.1 RFC says a "single-user client SHOULD NOT maintain more than 2 connections with any server or proxy."
However, the key word is "should"; most browsers use a different number. See this blog for a table of max connections per browser.
In theory there is no limit. But as the number of requests required to construct a page grows, the time taken for the page to be rendered increases. The relationship is not linear at low counts. Typically latency has a far bigger effect than bandwidth on actual throughput and there are mechanisms in HTTP to minimise the effect of this - such as keepalives and parallel requests. As Jon Grant says, there are limits on the number of concurrent requests.
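To get a feel for what such a per-host cap means in practice, here is a toy sketch (plain Python; the URLs are placeholders and 6 is just a typical desktop-browser figure, not a spec value):
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

urls = [f"https://example.com/asset{i}.js" for i in range(30)]   # hypothetical assets

def fetch(url):
    with urlopen(url) as resp:
        return url, resp.status, len(resp.read())

# At most 6 requests in flight at once, similar to a browser's per-host limit;
# the remaining requests queue up behind them.
with ThreadPoolExecutor(max_workers=6) as pool:
    for url, status, size in pool.map(fetch, urls):
        print(status, size, url)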
A full answer to this question would fill a book - here's a good one.

Can HTTP POST be limitless?

What is the specification for maximum data size one can send with HTTP POST method?
Quite amazing how all answers talk about IIS, as if that were the only web server that mattered. Even back in 2010 when the question was asked, Apache had between 60% and 70% of the market share. Anyway,
The HTTP protocol does not specify a limit.
The POST method allows sending far more data than the GET method, which is limited by the URL length - about 2KB.
The maximum POST request body size is configured on the HTTP server and typically ranges from 1MB to 2GB.
The HTTP client (browser or other user agent) can have its own limitations. Therefore, the maximum POST body request size is min(serverMaximumSize, clientMaximumSize).
Here are the POST body sizes for some of the more popular HTTP servers:
Nginx (largest web server market share as of April 2019) - default 1MB, no practical maximum (2**63)
Apache - maximum 2GB, no default documented
IIS - default 28.6MB for the request length, 2048 bytes for the query string; maximum undocumented
InfluxDB - default ~25MB, maximum undocumented
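If you need to find out empirically where a particular server draws the line, a rough probing sketch (Python standard library only; the URL is a placeholder, and only point this at a server you control):
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def probe_post_limit(url, sizes_mb=(1, 10, 100, 1000)):
    # Send increasingly large bodies until the server rejects one,
    # typically with 413 Request Entity Too Large.
    for mb in sizes_mb:
        req = Request(url, data=b"x" * (mb * 1024 * 1024), method="POST")
        try:
            with urlopen(req) as resp:
                print(f"{mb} MB accepted ({resp.status})")
        except HTTPError as err:
            print(f"{mb} MB rejected ({err.code})")
            break

probe_post_limit("https://example.com/upload")   # placeholder URL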
EDIT (2019) This answer is now pretty redundant but there is another answer with more relevant information.
It rather depends on the web server and web browser:
Internet Explorer (all versions): 2GB minus 1 byte
Mozilla Firefox (all versions): 2GB minus 1 byte
IIS 1-5: 2GB minus 1 byte
IIS 6: 4GB minus 1 byte
Note that IIS only supports 200KB by default; the metabase needs amending to increase this.
http://www.motobit.com/help/scptutl/pa98.htm
The POST method itself does not have any limit on the size of data.
There is no limit according to the HTTP protocol itself, but implementations will have a practical upper limit. I have sent data exceeding 4 GB using POST to Apache, but some servers did have a limit of 4 GB at the time.
POST allows for an arbitrary length of data to be sent to a server, but there are limitations based on timeouts/bandwidth etc.
I think basically, it's safer to assume that it's not okay to send lots of data.
Different IIS web servers can process different amounts of data in the 'header', according to this (now deleted) article: http://classicasp.aspfaq.com/forms/what-is-the-limit-on-form/post-parameters.html
Note that there is no limit on the number of FORM elements you can pass via POST, but only on the aggregate size of all name/value pairs. While GET is limited to as low as 1024 characters, POST data is limited to 2 MB on IIS 4.0, and 128 KB on IIS 5.0. Each name/value is limited to 1024 characters, as imposed by the SGML spec. Of course this does not apply to files uploaded using enctype='multipart/form-data' ... I have had no problems uploading files in the 90 - 100 MB range using IIS 5.0, aside from having to increase the server.scriptTimeout value as well as my patience!
In an application I was developing I ran into what appeared to be a POST limit of about 2KB. It turned out to be that I was accidentally encoding the parameters into the URL instead of passing them in the body. So if you're running into a problem there, there is definitely a very small limit on the size of POST data you can send encoded into the URL.
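That mistake is easy to reproduce. With the Python requests library, for example (the URL is a placeholder; any HTTP client has the same distinction), the difference looks like this:
import requests

payload = {"comment": "x" * 5000}   # roughly 5 KB of form data

# Wrong for large payloads: params= encodes the data into the URL's query
# string, which runs into URL-length limits of a few KB.
# requests.post("https://example.com/submit", params=payload)

# Right: data= sends the payload in the request body, where limits are far higher.
requests.post("https://example.com/submit", data=payload)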
HTTP may not have an upper limit, but webservers may have one. In ASP.NET there is a default accept-limit of 4 MB, but you (the developer/webmaster) can change that to be higher or lower.

Maximum on HTTP header values?

Is there an accepted maximum allowed size for HTTP headers? If so, what is it? If not, is this something that's server specific or is the accepted standard to allow headers of any size?
No, HTTP does not define any limit. However, most web servers do limit the size of headers they accept. For example, the default limit in Apache is 8KB and in IIS it's 16K. The server will return a 413 Entity Too Large error if the header size exceeds that limit.
Related question: How big can a user agent string get?
As vartec says above, the HTTP spec does not define a limit; however, many servers do by default. This means, practically speaking, that the lower limit is 8K. For most servers, this limit applies to the sum of the request line and ALL header fields (so keep your cookies short).
Apache 2.0, 2.2: 8K
nginx: 4K - 8K
IIS: varies by version, 8K - 16K
Tomcat: varies by version, 8K - 48K (?!)
It's worth noting that nginx uses the system page size by default, which is 4K on most systems. You can check with this tiny program:
pagesize.c:
#include <unistd.h>
#include <stdio.h>

int main(void) {
    /* getpagesize() reports the size of a memory page in bytes. */
    int pageSize = getpagesize();
    printf("Page size on your system = %i bytes\n", pageSize);
    return 0;
}
Compile with gcc -o pagesize pagesize.c, then run ./pagesize. My Ubuntu server from Linode dutifully informs me the answer is 4k.
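If you would rather not compile anything, the same value is exposed by Python's standard library on Unix-like systems:
import resource
print(resource.getpagesize())   # typically 4096 on x86-64 Linux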
Here are the limits of the most popular web servers:
Apache - 8K
Nginx - 4K-8K
IIS - 8K-16K
Tomcat - 8K – 48K
Node (<13) - 8K; (>13) - 16K
HTTP does not place a predefined limit on the length of each header field or on the length of the header section as a whole, as described in Section 2.5. Various ad hoc limitations on individual header field length are found in practice, often depending on the specific field semantics.
HTTP header values are restricted by server implementations. The HTTP specification doesn't restrict header size.
A server that receives a request header field, or set of fields, larger than it wishes to process MUST respond with an appropriate 4xx (Client Error) status code. Ignoring such header fields would increase the server's vulnerability to request smuggling attacks (Section 9.5).
Most servers will return 413 Entity Too Large or appropriate 4xx error when this happens.
A client MAY discard or truncate received header fields that are larger than the client wishes to process if the field semantics are such that the dropped value(s) can be safely ignored without changing the message framing or response semantics.
Uncapped HTTP header size keeps the server exposed to attacks and can bring down its capacity to serve organic traffic.
Source
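If you want to see where a given server cuts you off, a simple probe is to grow a single custom header until the request is rejected (a sketch in plain Python; the URL is a placeholder, and only test servers you control):
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def probe_header_limit(url):
    # Grow one custom header until the server rejects the request,
    # typically with a 4xx such as 431 or 413, or drops the connection.
    for kb in (1, 4, 8, 16, 32, 64):
        req = Request(url, headers={"X-Probe": "a" * (kb * 1024)})
        try:
            with urlopen(req) as resp:
                print(f"{kb} KB header accepted ({resp.status})")
        except HTTPError as err:
            print(f"{kb} KB header rejected ({err.code})")
            break

probe_header_limit("https://example.com/")   # placeholder URL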
RFC 6265 dated 2011 prescribes specific limits on cookies.
https://www.rfc-editor.org/rfc/rfc6265
6.1. Limits
Practical user agent implementations have limits on the number and size of cookies that they can store. General-use user agents SHOULD provide each of the following minimum capabilities:
o At least 4096 bytes per cookie (as measured by the sum of the length of the cookie's name, value, and attributes).
o At least 50 cookies per domain.
o At least 3000 cookies total.
Servers SHOULD use as few and as small cookies as possible to avoid reaching these implementation limits and to minimize network bandwidth due to the Cookie header being included in every request.
Servers SHOULD gracefully degrade if the user agent fails to return one or more cookies in the Cookie header because the user agent might evict any cookie at any time on orders from the user.
--
The intended audience of the RFC is implementers: what must be supported by a user agent or a server. It appears that to tune your server to support everything the browser is allowed to send, you would need to configure 4096 * 50 bytes as the limit. As the text that follows suggests, this appears to be far in excess of what is needed for the typical web application. It would be useful to take the current limit and the RFC's outlined upper limit and compare the memory and I/O consequences of the higher configuration.
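To make that concrete, the per-domain worst case under those minimums works out as follows (simple arithmetic, not a measured figure):
per_cookie_limit = 4096      # bytes per cookie (RFC 6265 minimum)
cookies_per_domain = 50      # cookies per domain (RFC 6265 minimum)
worst_case = per_cookie_limit * cookies_per_domain
print(worst_case, "bytes =", worst_case // 1024, "KB per domain")   # 204800 bytes = 200 KB
So a server tuned to accept everything a compliant browser may send back would need to allow roughly 200 KB of Cookie header per request - far more than a typical application needs.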
I also found that in some cases the reason for a 502/400 response with many headers could be the sheer number of headers, regardless of their size.
From the HAProxy docs:
tune.http.maxhdr
Sets the maximum number of headers in a request. When a request comes with a number of headers greater than this value (including the first line), it is rejected with a "400 Bad Request" status code. Similarly, too large responses are blocked with "502 Bad Gateway". The default value is 101, which is enough for all usages, considering that the widely deployed Apache server uses the same limit. It can be useful to push this limit further to temporarily allow a buggy application to work by the time it gets fixed. Keep in mind that each new header consumes 32 bits of memory for each session, so don't push this limit too high.
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.2-tune.http.maxhdr
If you are going to use a DDoS protection provider like Akamai, they have a maximum limit of 8k on the response header size, so essentially try to keep your response headers below 8k.

How can I determine the transfer rate ?

Can I determine from an ASP.NET application the transfer rate, i.e. how many KB per second are transferred?
You can monitor some of ASP.NET's performance counters.
See here for some examples.
Some specific ones that may help you figure out what you want are:
Request Bytes Out Total
The total size, in bytes, of responses sent to a client. This does not include standard HTTP response headers.
Requests/Sec
The number of requests executed per second. This represents the current throughput of the application. Under constant load, this number should remain within a certain range, barring other server work (such as garbage collection, cache cleanup thread, external server tools, and so on).
Requests Total
The total number of requests since the service was started.
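If you sample the "Request Bytes Out Total" counter at two points in time, the transfer rate is just the difference divided by the interval. A small sketch of that calculation (the sample values are made up; on a real server you would read them from the performance counter API):
def transfer_rate_kb_per_s(bytes_out_t0, bytes_out_t1, interval_s):
    # Rate = change in total bytes sent, divided by elapsed time, in KB/s.
    return (bytes_out_t1 - bytes_out_t0) / interval_s / 1024

# Hypothetical samples taken 10 seconds apart:
print(transfer_rate_kb_per_s(1_250_000, 3_810_000, 10.0))   # 250.0 KB/s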
There are a number of debugging tools you can use to check this at the browser. It will of course vary by page, cache settings, server load, network connection speed, etc.
Check out http://www.fiddlertool.com/fiddler/
Or if you are using Firefox, the FireBug add-in http://addons.mozilla.org/en-US/firefox/addon/1843
