Content-Security-Policy headers and header size

Do you have any recommendations for handling a large Content-Security-Policy HTTP header? Some applications cannot handle reading a large Content-Security-Policy header, due to limits on header size, yet listing every domain a site requires costs bytes for each domain. Have you run into this limitation, and how did you work around it?

In practice, there are two types of restrictions on HTTP header size, server side and client side:
Server side: the maximum size of the HTTP response headers the web server will handle; for Apache the default is 8190 bytes. If the total size of all response headers (the CSP header plus the status line "HTTP/1.1 200 OK", Content-Type: text/html; charset=utf-8, and everything else) exceeds the allowed limit, the web server returns a 502 error.
Client side: a limited receive buffer on some mobile devices. This can be detected from CSP violation reports, in which the original-policy field arrives truncated. Last observed about six years ago.
To fix the problem:
use a wildcard to whitelist a whole set of subdomains (for example *.google.com);
use img-src * to allow images from any origin, since XSS through images is unlikely;
use the 'strict-dynamic' token in the script-src directive and remove all host-based sources from it, except http: and https: (see strict CSP by Google for details) - a rough configuration sketch follows this list.
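A minimal nginx sketch of that last approach, assuming the application exposes a per-request nonce in a variable (called $csp_nonce here, which is not a built-in nginx variable and would have to be set by the application or a module):
# Hypothetical sketch: a nonce-based strict CSP that stays short by dropping
# the long host whitelist; adjust the directives to the site's actual needs.
add_header Content-Security-Policy "script-src 'nonce-$csp_nonce' 'strict-dynamic' https: http:; object-src 'none'; base-uri 'none'; img-src *";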

Related

What are the options of the gzip_proxied directive for?

The gzip_proxied directive allows for the following options (non-exhaustive):
expired
enables compression if a response header includes the “Expires” field with a value that disables caching;
no-cache
enables compression if a response header includes the “Cache-Control” field with the “no-cache” parameter;
no-store
enables compression if a response header includes the “Cache-Control” field with the “no-store” parameter;
private
enables compression if a response header includes the “Cache-Control” field with the “private” parameter;
no_last_modified
enables compression if a response header does not include the “Last-Modified” field;
no_etag
enables compression if a response header does not include the “ETag” field;
auth
enables compression if a request header includes the “Authorization” field;
I can't see any rational reason to use most of these options. For example, why would whether or not a proxied request contains the Authorization header, or Cache-Control: private, affect whether or not I want to gzip it?
Given that old versions of Nginx strip ETags from responses when gzipping them, I can see a use case for no_etag: if you don't have Nginx configured to generate ETags for your gzipped responses, you may prefer to pass on an uncompressed response with an ETag rather than generate a compressed one without an ETag.
I can't figure out the others, though.
What are the intended use cases of each of these options?
From the admin guide: (emphasis mine)
The directive has a number of parameters specifying which kinds of proxied requests NGINX should compress. For example, it is reasonable to compress responses only to requests that will not be cached on the proxy server. For this purpose the gzip_proxied directive has parameters that instruct NGINX to check the Cache-Control header field in a response and compress the response if the value is no-cache, no-store, or private. In addition, you must include the expired parameter to check the value of the Expires header field. These parameters are set in the following example, along with the auth parameter, which checks for the presence of the Authorization header field (an authorized response is specific to the end user and is not typically cached)
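In nginx configuration terms, the setup the guide describes would look roughly like the following sketch (not the guide's verbatim example):
gzip on;
# compress proxied responses only when Cache-Control/Expires/Authorization
# indicate the response will not be cached on this proxy
gzip_proxied no-cache no-store private expired auth;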
I'd agree that not compressing cacheable responses is reasonable. Consider that the primary saving of caching at a proxy is increased performance (response time) and reduced time and bandwidth spent by the proxy in requesting the upstream resource. The trade-off for these performance benefits is the cost of cache storage. Here are some use cases where not compressing cacheable responses makes sense:
In the normal web traffic of many sites, non-personalized responses (which constitute the majority of cacheable responses) have already been optimized through techniques like script minification, image size optimization, etc., in a web build process. While these static resources might shrink slightly from compression, the CPU cost of trying to gzip them smaller is probably not an efficient use of the proxy layer machine resources. But dynamically generated pages, served to logged-in users, containing tons of application-generated content would very likely benefit from compression (and would typically not be cacheable).
You are setting up a proxy in front of a costly upstream service, but you are serving responses to another proxy that will be responsible for compression for each user agent. For example, if you have a CDN that makes multiple requests to the same costly upstream resource (from separate geographical edge locations) and you want to ensure that you can reuse the costly response. If the CDN caches uncompressed versions (to service both compressed and uncompressed user agents) you may be compressing at your proxy only to have them uncompress again at the CDN, which is simply a waste of hardware and electricity on both sides, to reduce bandwidth in the highest-bandwidth part of the chain. (Response gzip compression is most beneficial at the last mile, to get the response data to your user's phone which has dropped to one dot of signal as they enter the subway.)
For sizable response entities, byte-range requests may come in (from various user agents, often via downstream CDN intermediaries) on behalf of clients that don't support compression. The CDN is likely to serve those byte-range requests from its own cache, provided it already has an uncompressed version cached.

Is there a way to disable byte-range requests on the client side?

I understand that one can set Accept-Ranges: none on the server to advise the client not to attempt a range request.
I am wondering if there is a way to tell a browser not to attempt a range request without having to make any changes on the server.
For instance, is there a setting in Chrome or Firefox that I can toggle to deter my browser from making range requests?
You answered the question in the first sentence.
The relevant RFC is 7233, Hypertext Transfer Protocol (HTTP/1.1): Range Requests:
2.3. Accept-Ranges
A client MAY generate
range requests without having received this header field for the
resource involved.
A server that does not support any kind of range request for the
target resource MAY send
Accept-Ranges: none
to advise the client not to attempt a range request.
If you mean you want to know how to disable range requests in a browser altogether, consult the specific browser's documentation. A quick web search yielded no options for me to do this for common browsers.
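Purely for completeness, the server-side counterpart in nginx could look like the sketch below (the /downloads/ location is hypothetical); max_ranges 0 disables byte-range support outright rather than merely advising against it:
location /downloads/ {
    # ignore Range headers for these responses; clients always get the full entity
    max_ranges 0;
}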

Do web servers impose character limits on GET requests? [duplicate]

What's the maximum length of an HTTP GET request?
Is there a response error defined that the server can/should return if it receives a GET request that exceeds this length?
This is in the context of a web service API, although it's interesting to see the browser limits as well.
The limit is dependent on both the server and the client used (and if applicable, also the proxy the server or the client is using).
Most web servers have a limit of 8192 bytes (8 KB), which is usually configurable somewhere in the server configuration. On the client side, the HTTP/1.1 specification even warns about this. Here's an extract from chapter 3.2.1:
Note: Servers ought to be cautious about depending on URI lengths above 255 bytes, because some older client or proxy implementations might not properly support these lengths.
The limit in Internet Explorer and Safari is about 2 KB, in Opera about 4 KB and in Firefox about 8 KB. We may thus assume that 8 KB is the maximum possible length, that 2 KB is a safer length to rely on at the server side, and that 255 bytes is the safest length if you need to assume the entire URL will come through.
If the limit is exceeded, the browser or the server will usually just truncate the characters beyond it without any warning. Some servers, however, may send an HTTP 414 error.
If you need to send large data, then better use POST instead of GET. Its limit is much higher, but more dependent on the server used than the client. Usually up to around 2 GB is allowed by the average web server.
This is also configurable somewhere in the server settings. The average server will display a server-specific error/exception when the POST limit is exceeded, usually as an HTTP 500 error.
You are asking two separate questions here:
What's the maximum length of an HTTP GET request?
As already mentioned, HTTP itself doesn't impose any hard-coded limit on request length, but browsers have limits in the 2 KB - 8 KB range (255 bytes if we count very old clients).
Is there a response error defined that the server can/should return if it receives a GET request that exceeds this length?
That's the one nobody has answered.
HTTP/1.1 defines status code 414 Request-URI Too Long for the cases where a server-defined limit is reached. You can see further details in RFC 2616.
For the case of client-defined limits, there is no point in the server returning anything, because the server never receives the request at all.
Browser limits are:
Browser                 Address bar    document.location or anchor tag
----------------------------------------------------------------------
Chrome                  32779          >64k
Android                 8192           >64k
Firefox                 >64k           >64k
Safari                  >64k           >64k
Internet Explorer 11    2047           5120
Edge 16                 2047           10240
Want more? See this question on Stack Overflow.
A similar question is here: Is there a limit to the length of a GET request?
I've hit the limit on my shared hosting account, but I think the browser returned a blank page before the request ever reached the server.
Technically, I have seen HTTP GET requests run into issues when the URL length goes beyond 2000 characters. In that case, it's better to use HTTP POST or to split the URL.
As already mentioned, HTTP itself doesn't impose any hard-coded limit on request length, but browsers commonly limit GET URLs to around 2048 characters.
In my experience there isn't any hard limit on a GET request. I was able to send ~4000 characters as part of the query string using both the Chrome browser and the curl command, against a Tomcat 8.x server, which returned the expected 200 OK response.
(Screenshot of the Google Chrome request and 200 OK response omitted; the endpoint was hidden for security reasons.)

Is there a practical HTTP Header length limit?

I have a web application that adds contextual information to XmlHttpRequest objects using the setRequestHeader API. I am using a custom header name (e.g. X-Foo) and a JSON structured value. It isn't part of the URL QueryString or POST body because it is meta information about the request.
Is there a practical size limit to the header value? If my JSON gets truncated, it becomes unparseable. I am most concerned with limits in Apache 2, Tomcat 6 and IIS 7. I did a Google search for http header length limit, but many of the results seem dated. There are some relevant comments in How big can a user agent string get? but not as specific as I would like.
Edit:
I just ran across this similar question - Maximum on http header values?
Each web server has its own limitations, but it matters whether the limit applies to the request line plus all header fields together or to each header field individually.
Here’s a summary:
Apache 1.3, 2.0, 2.2, 2.3: 8190 Bytes (for each header field)
IIS:
4.0: 2097152 Bytes (for the request line plus header fields)
5.0: 131072 Bytes, 16384 Bytes with Windows 2000 Service Pack 4 (for the request line plus header fields)
6.0: 16384 Bytes (for each header field)
Tomcat:
5.5.x/6.0.x: 49152 Bytes (for the request line plus header fields)
7.0.x: 8190 Bytes (for the request line plus header fields)
So to conclude: to be accepted by all of the web servers above, a request's request line plus header fields should not exceed 8190 bytes. This is also the limit for each individual header field (effectively even less).
Yes, but the limits are configurable and dependent on platform. For example, Tomcat has a default limit of 8K. I believe that IIS 6, not sure about IIS 7, has a limit of 16K. I ran into this when using integrated windows authentication for several web sites. Turns out my security token was too large when encoded into the header. Fortunately, these are configurable. Registry settings for IIS can be found at http://support.microsoft.com/kb/820129. I believe the key settings to change are MaxFieldLength (per header size) and MaxRequestBytes (total size of request).
For Apache, I found this Server Limits for Apache Security article that lists these directives:
# allow up to 100 headers in a request
LimitRequestFields 100
# each header may be up to 8190 bytes long
LimitRequestFieldSize 8190
For Nginx, the large_client_header_buffers directive from HttpCoreModule controls this:
The longest header line of the request must also not exceed the size
of one buffer, otherwise the client gets a "Bad Request" (400) error.
By default the size of one buffer is equal to the page size, which is
either 4K or 8K depending on the platform.
While you can configure the server, it's unlikely that you really can configure the whole way through firewalls, load balancers and proxies. Keeping the header size small keeps problems away.
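That said, if the proxy tier happens to be nginx and you do need room for a large custom header such as the X-Foo value in the question, a sketch along these lines raises the per-field limit (the sizes are illustrative, not a recommendation):
# the request line and each header field must fit into a single one of these buffers
large_client_header_buffers 8 16k;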
The Flash Media Server 4.5 has a very short default header length limit which can cause the server to simply not respond, particularly in circumstances where there is a moderate cookie load.
See: Flash Media Server 4.5 Configuration and Administration: Configuring the server
Configuring Apache HTTP Server: Specify the maximum HTTP header line length
In the Flash Media Server Adaptor.xml file, the MaxHeaderLineLength
element determines the size of the HTTP header the server can handle.
The default value for MaxHeaderLineLength is 1024 bytes. Some browsers
send a header larger than 1024 bytes. In this scenario, Apache sends
back an empty response. To fix this issue, configure
MaxHeaderLineLength to 8192.
Note: By default, the Apache HTTP header size limit is 8 KB (8190 bytes plus a carriage return).
Putting this here in case the header size limit on Flash Media Server bites someone else.

Maximum on HTTP header values?

Is there an accepted maximum allowed size for HTTP headers? If so, what is it? If not, is this something that's server specific or is the accepted standard to allow headers of any size?
No, HTTP does not define any limit. However, most web servers do limit the size of the headers they accept. For example, in Apache the default limit is 8 KB and in IIS it's 16 K. The server will return a 413 Entity Too Large error if the header size exceeds that limit.
Related question: How big can a user agent string get?
As vartec says above, the HTTP spec does not define a limit, but many servers do by default. This means that, practically speaking, the lower limit is 8K. For most servers, this limit applies to the sum of the request line and ALL header fields (so keep your cookies short).
Apache 2.0, 2.2: 8K
nginx: 4K - 8K
IIS: varies by version, 8K - 16K
Tomcat: varies by version, 8K - 48K (?!)
It's worth noting that nginx uses the system page size by default, which is 4K on most systems. You can check with this tiny program:
pagesize.c:
#include <unistd.h>
#include <stdio.h>
int main() {
    int pageSize = getpagesize();
    printf("Page size on your system = %i bytes\n", pageSize);
    return 0;
}
Compile with gcc -o pagesize pagesize.c, then run ./pagesize. My Ubuntu server from Linode dutifully informs me the answer is 4K.
Here are the limits of the most popular web servers:
Apache - 8K
Nginx - 4K-8K
IIS - 8K-16K
Tomcat - 8K – 48K
Node (<13) - 8K; (>13) - 16K
HTTP does not place a predefined limit on the length of each header
field or on the length of the header section as a whole, as described
in Section 2.5. Various ad hoc limitations on individual header
field length are found in practice, often depending on the specific
field semantics.
HTTP header values are restricted by server implementations; the HTTP specification itself doesn't restrict header size.
A server that receives a request header field, or set of fields,
larger than it wishes to process MUST respond with an appropriate 4xx
(Client Error) status code. Ignoring such header fields would
increase the server's vulnerability to request smuggling attacks
(Section 9.5).
Most servers will return a 413 Entity Too Large or another appropriate 4xx error when this happens.
A client MAY discard or truncate received header fields that are
larger than the client wishes to process if the field semantics are
such that the dropped value(s) can be safely ignored without changing
the message framing or response semantics.
An uncapped HTTP header size leaves the server exposed to attacks and can reduce its capacity to serve organic traffic.
Source
RFC 6265 dated 2011 prescribes specific limits on cookies.
https://www.rfc-editor.org/rfc/rfc6265
6.1. Limits
Practical user agent implementations have limits on the number and
size of cookies that they can store. General-use user agents SHOULD
provide each of the following minimum capabilities:
o At least 4096 bytes per cookie (as measured by the sum of the
length of the cookie's name, value, and attributes).
o At least 50 cookies per domain.
o At least 3000 cookies total.
Servers SHOULD use as few and as small cookies as possible to avoid
reaching these implementation limits and to minimize network
bandwidth due to the Cookie header being included in every request.
Servers SHOULD gracefully degrade if the user agent fails to return
one or more cookies in the Cookie header because the user agent might
evict any cookie at any time on orders from the user.
--
The RFC describes what a user agent or a server must support. It appears that to tune your server to accept everything a browser is allowed to send, you would need to configure roughly 4096 × 50 bytes (about 200 KB) as the limit. As the RFC text quoted above suggests, this is far in excess of what a typical web application needs. It would be useful to take your current limit and the RFC's upper bound and compare the memory and I/O consequences of the higher configuration.
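As an illustration of that trade-off, sizing an nginx front end for the RFC's worst case would mean per-field buffers on the order of the ~200 KB figure above (a sketch with illustrative values only):
# room for roughly 50 cookies of 4 KB each arriving in one Cookie header field;
# each connection that needs these buffers may allocate up to number x size bytes,
# which is why this is rarely a sensible default
large_client_header_buffers 4 256k;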
I also found that in some cases the reason for a 502/400 when many headers are present could be the sheer number of headers, regardless of their size.
From the HAProxy docs:
tune.http.maxhdr
Sets the maximum number of headers in a request. When a request comes with a
number of headers greater than this value (including the first line), it is
rejected with a "400 Bad Request" status code. Similarly, too large responses
are blocked with "502 Bad Gateway". The default value is 101, which is enough
for all usages, considering that the widely deployed Apache server uses the
same limit. It can be useful to push this limit further to temporarily allow
a buggy application to work by the time it gets fixed. Keep in mind that each
new header consumes 32bits of memory for each session, so don't push this
limit too high.
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.2-tune.http.maxhdr
If you are going to use a DDoS-protection provider such as Akamai, they have a maximum limit of 8 K on response header size, so try to keep your response headers below 8 K.
