What is the maximum max-age for a Squid backend? - http

What is the maximum max-age I can deliver to Squid from my backend without getting into unsupported territory?
The original RFC limited max-age to 1 year.
Amazon AWS limits it to 10 years.
Is there a limit for Squid?
The real problem is to preserve objects in the cache forever. So can I set max-age to 100 years or more?
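For illustration, a ten-year lifetime from the backend is just a large number of seconds in the response header (315360000 seconds = 10 years; the header below is a made-up example):

    HTTP/1.1 200 OK
    Content-Type: image/png
    Cache-Control: public, max-age=315360000

Note that RFC 7234 tells caches to treat delta-seconds values that overflow their integer representation as 2147483648 (2^31 seconds, roughly 68 years), so anything beyond that is effectively the same value. Whether Squid actually keeps an object that long also depends on its own storage settings (cache_dir size, maximum_object_size, replacement policy), not only on max-age.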

Related

lua-resty-redis set_keepalive recommended settings

I'm using
red:set_keepalive(max_idle_timeout, pool_size)
(From here: https://github.com/openresty/lua-resty-redis#set_keepalive)
with Nginx and trying to determine the best values to use for max_idle_timeout and pool_size.
If my worker_connections is set to 1024, does it make sense to have a pool_size of 1024?
For max_idle_timeout, is 60000 (1 minute) too "aggressive"? Is it safer to go with a smaller value?
I think the Check List for Issues section of the official documentation has a good guideline for sizing your connection pool:
Basically if your NGINX handle n concurrent requests and your NGINX has m workers, then the connection pool size should be configured as n/m. For example, if your NGINX usually handles 1000 concurrent requests and you have 10 NGINX workers, then the connection pool size should be 100.
So, if you expect 1024 concurrent requests that actually connect to Redis then a good size for your pool is 1024/worker_processes. Maybe a few more to account for uneven request distribution among workers.
Your keepalive timeout should be long enough to account for the way traffic arrives. If your traffic is constant, you can lower the timeout, or stay with 60 seconds; in most cases a longer timeout won't make any noticeable difference.
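As a minimal sketch (hypothetical host/port and numbers: a 60-second idle timeout and a per-worker pool of 256, i.e. roughly 1024 expected concurrent Redis requests spread over 4 workers):

    local redis = require "resty.redis"
    local red = redis:new()
    red:set_timeout(1000)                       -- 1 s connect/send/read timeout

    local ok, err = red:connect("127.0.0.1", 6379)
    if not ok then
        ngx.log(ngx.ERR, "failed to connect to redis: ", err)
        return
    end

    -- ... run your commands here, e.g. red:get("some_key") ...

    -- put the connection back into the pool instead of closing it:
    -- keep it idle for up to 60 s, in a per-worker pool of 256 connections
    local ok, err = red:set_keepalive(60000, 256)
    if not ok then
        ngx.log(ngx.ERR, "failed to set keepalive: ", err)
    end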

What is difference between max-age and max-stale in cache control mechanism

I know this is a simple question, and I am sure nobody will mark it as a duplicate, because I have searched all over SO. My question is: what is the difference between max-age and max-stale in the HTTP Cache-Control mechanism? I've read about it here, but I felt it was a little complex, so it would be a great help if anybody could explain it.
From RFC 7234:
The "max-age" request directive indicates that the client is
unwilling to accept a response whose age is greater than the
specified number of seconds. Unless the max-stale request directive
is also present, the client is not willing to accept a stale
response.
...
The "max-stale" request directive indicates that the client is
willing to accept a response that has exceeded its freshness
lifetime. If max-stale is assigned a value, then the client is
willing to accept a response that has exceeded its freshness lifetime
by no more than the specified number of seconds.
That is, max-age is the oldest that a response can be, as long as the Cache-Control from the origin server indicates that it is still fresh. max-stale indicates that, even if the response is known to be stale, you will also accept it as long as it's only stale by that number of seconds.
As per Serving Stale Responses:
A cache SHOULD generate a Warning header field with the 110 warn-code
(see Section 5.5.1) in stale responses.
So, if you specified max-stale and received a no-longer-fresh response, the Warning header would let you know.
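As a concrete (made-up) example, a client request combining the two directives might look like this:

    GET /prices.json HTTP/1.1
    Host: api.example.com
    Cache-Control: max-age=600, max-stale=120

Here the cache may answer from storage only if the stored response is at most 600 seconds old, and it may still use it after the freshness lifetime has expired as long as it is no more than 120 seconds stale; in that stale case it should add the 110 Warning described above.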
Try this; it explains it with an example:
https://msdn.microsoft.com/en-us/library/27w3sx5e(v=vs.110).aspx

How do HTTP servers handle very large HTTP requests?

As I understand it, the body size of an HTTP POST request is unlimited, so a client may send gigabytes of data in one HTTP request. Now I wonder how an HTTP server should handle such requests. How do Tomcat and Jetty handle them?
Not true. For example, Apache has a default size limit of 20 MB, configurable in httpd.conf. Once that is exceeded, the TCP connection is closed so the client cannot send any more.
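The answer doesn't name the directive, but the usual httpd.conf setting for capping the request body is LimitRequestBody (value in bytes); a 20 MB cap would look like this:

    # httpd.conf - reject request bodies larger than 20 MB (20971520 bytes)
    LimitRequestBody 20971520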
Tomcat is configured to limit the post size to 2 MB by default. See http://tomcat.apache.org/tomcat-7.0-doc/config/http.html:
maxPostSize
The maximum size in bytes of the POST which will be handled by the container FORM URL parameter parsing. The limit can be disabled by setting this attribute to a value less than or equal to 0. If not specified, this attribute is set to 2097152 (2 megabytes).
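For illustration, maxPostSize is an attribute of the HTTP Connector in conf/server.xml; the 10 MB value below is just an example:

    <!-- conf/server.xml: raise the form-parameter POST limit from the 2 MB default to 10 MB -->
    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443"
               maxPostSize="10485760" />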

HTTP Keep-Alive for <1KB calls every 1 second

I am optimizing my web server settings to handle a large number of concurrent users, and one of the problems I'm running into is deciding whether or not to disable HTTP Keep-Alive.
I am using CDN for all the images on the site so when my HTML page is requested I am downloading approximately 5 files (js, css, etc) on first load... and then only HTML on each successive load.
Other than that, the only thing I have is an HTTP POST update invoked every second (the resulting JSON is typically less than 1 KB).
So, with these constraints, do you think that disabling HTTP Keep-Alive on the server would be a good idea? Would that improve the number of concurrent users the server can handle?
(By the way, I've reduced KeepAliveTimeout/ConnectionTimeout to 15 seconds in the IIS 7.5 settings)
From what you are describing, you are making one call per client per second, so it all boils down to how long it takes to serve a request. Let's say it takes 100 ms. An HTTP Keep-Alive of 15 seconds then lets 15 calls share one connection without re-establishing it, but the connection is actually active (being used) for only 1.5 of those 15 seconds; the rest of the time it is effectively blocking some other client/connection (assuming there are other clients waiting). Without keep-alive, you could probably accommodate 8-9 times more concurrent clients.
However, all that said, you have to look at the actual parameters to make a decision: how many concurrent clients you are likely to have, what the response time is, and so on. The best way is to do simulation/load testing and measure the performance. If your server can handle the anticipated peak concurrent user load with keep-alive enabled, you may as well keep keep-alive on.
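Sticking with the hypothetical numbers above (1 request per client per second, 100 ms service time, 15 s keep-alive timeout), the back-of-the-envelope arithmetic is:

    with keep-alive:    connection held for 15 s, busy for 15 x 0.1 s = 1.5 s  -> ~10% utilisation
    without keep-alive: a connection exists only while busy, so each connection slot can serve
                        roughly 15 / 1.5 = 10x as many clients (~8-9x after TCP handshake overhead)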
BTW, also see this related question on SO: http keep-alive in the modern age

Can HTTP POST be limitless?

What does the specification give as the maximum data size one can send with the HTTP POST method?
Quite amazing how all answers talk about IIS, as if that were the only web server that mattered. Even back in 2010 when the question was asked, Apache had between 60% and 70% of the market share. Anyway,
The HTTP protocol does not specify a limit.
The POST method allows sending far more data than the GET method, which is limited by the URL length - about 2KB.
The maximum POST request body size is configured on the HTTP server and typically ranges from 1 MB to 2 GB.
The HTTP client (browser or other user agent) can have its own limitations. Therefore, the maximum POST body request size is min(serverMaximumSize, clientMaximumSize).
Here are the POST body sizes for some of the more popular HTTP servers:
Nginx (largest web server market share as of April 2019) - default 1MB, no practical maximum (2**63)
Apache - maximum 2GB, no default documented
IIS - default 28.6MB for the request length, 2048 bytes for the query string; maximum undocumented
InfluxDB - default ~25MB, maximum undocumented
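These limits live in each server's own configuration. For example, with illustrative (not recommended) values:

    # nginx - client_max_body_size, default 1m (0 disables the check)
    client_max_body_size 50m;

    <!-- IIS web.config - maxAllowedContentLength is in bytes, default 30000000 (~28.6 MB) -->
    <system.webServer>
      <security>
        <requestFiltering>
          <requestLimits maxAllowedContentLength="52428800" />
        </requestFiltering>
      </security>
    </system.webServer>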
EDIT (2019): This answer is now pretty redundant, but there is another answer with more relevant information.
It rather depends on the web server and web browser:
Internet Explorer   all versions   2GB-1
Mozilla Firefox     all versions   2GB-1
IIS 1-5                            2GB-1
IIS 6                              4GB-1
Although IIS can accept that much, it only supports 200 KB by default; the metabase needs amending to increase this.
http://www.motobit.com/help/scptutl/pa98.htm
The POST method itself does not have any limit on the size of data.
There is no limit according to the HTTP protocol itself, but implementations will have a practical upper limit. I have sent data exceeding 4 GB using POST to Apache, but some servers did have a limit of 4 GB at the time.
POST allows for an arbitrary length of data to be sent to a server, but there are limitations based on timeouts/bandwidth etc.
I think basically, it's safer to assume that it's not okay to send lots of data.
Different IIS web servers can process different amounts of data in the 'header', according to this (now deleted) article (http://classicasp.aspfaq.com/forms/what-is-the-limit-on-form/post-parameters.html):
Note that there is no limit on the number of FORM elements you can pass via POST, but only on the aggregate size of all name/value pairs. While GET is limited to as low as 1024 characters, POST data is limited to 2 MB on IIS 4.0, and 128 KB on IIS 5.0. Each name/value is limited to 1024 characters, as imposed by the SGML spec. Of course this does not apply to files uploaded using enctype='multipart/form-data' ... I have had no problems uploading files in the 90 - 100 MB range using IIS 5.0, aside from having to increase the server.scriptTimeout value as well as my patience!
In an application I was developing, I ran into what appeared to be a POST limit of about 2 KB. It turned out that I was accidentally encoding the parameters into the URL instead of passing them in the body. So if you're running into a problem there, there is definitely a very small limit on the size of POST data you can send encoded into the URL.
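If you want to check which of the two you are doing, the difference is easy to see with curl (the URL and parameters below are placeholders):

    # parameters in the URL (query string) - subject to the small URL length limits
    curl "https://example.com/search?q=very-long-value"

    # parameters in the request body (an application/x-www-form-urlencoded POST) -
    # only the server's body-size limit applies
    curl --data "q=very-long-value" https://example.com/search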
HTTP may not have an upper limit, but web servers may have one. In ASP.NET there is a default accept-limit of 4 MB, but you (the developer/webmaster) can change that to be higher or lower.
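That ASP.NET limit corresponds to the httpRuntime maxRequestLength setting in web.config; the value is in kilobytes, so the default of 4096 is 4 MB. Raising it to 100 MB (an illustrative value) looks like this:

    <configuration>
      <system.web>
        <!-- maxRequestLength is in KB: 102400 KB = 100 MB -->
        <httpRuntime maxRequestLength="102400" />
      </system.web>
    </configuration>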
