Caching reverse proxy for dynamic content

I was thinking about asking on Software Recommendations, but then I realized it may be too strange a request and needs some clarification first.
My points are:
Each response contains an ETag, which is a hash of the content and is globally unique (with sufficient probability).
The content is (mostly) dynamic and may change at any time (so Expires and max-age headers are useless here).
The content is partly user-dependent, as determined by the permissions (which themselves change sometimes).
Basically, the proxy should contain a cache mapping the etag to the response content. The etag gets obtained from the server and in the most common case, the server does not deal with the response content at all.
It should go as follows: the proxy always sends a request to the server and then either
1 the server returns only the ETag, the proxy makes a lookup based on it, and
1.1 on a cache hit, it reads the response data from its cache and sends the response to the client, or
1.2 on a cache miss, it asks the server again; the server then returns the response with content and ETag, the proxy stores it in its cache and sends the response to the client;
2 or the server returns the response with content and ETag; the proxy stores the data in its cache and sends the response to the client.
For simplicity, I left out the handling of the if-none-match header, which is rather obvious.
My reason for this is that the most common case 1.1 can be implemented very efficiently in the server (using its cache mapping requests to etags; the content isn't cached in the server), so that most requests can be handled without the server dealing with the response content. This should be better than first getting the content from a side cache and then serving it.
In case 1.2, there are two requests to the server, which sounds bad, but is no worse than the server asking a side cache and getting a miss.
Q1: I wonder how to map the first request to HTTP. In case 1, it's like a HEAD request; in case 2, it's like a GET. The decision between the two is up to the server: if it can serve the ETag without computing the content, it's case 1; otherwise, it's case 2.
Q2: Is there a reverse proxy doing something like this? I've read about nginx, HAProxy and Varnish and it doesn't seem to be the case. This leads me to Q3: Is this a bad idea? Why?
Q4: If not, then which existing proxy is easiest to adapt?
An Example
A GET request like /catalog/123/item/456 from user U1 was served with some content C1 and etag: 777777. The proxy stored C1 under the key 777777.
Now the same request comes from user U2. The proxy forwards it, the server returns just etag: 777777, and the proxy is lucky: it finds C1 in its cache (case 1.1) and sends it to U2. In this example, neither the client nor the proxy knew the expected result.
The interesting part is how the server could know the ETag without computing the answer. For example, it can have a rule stating that requests of this form return the same result for all users, assuming the given user is allowed to see it. So when the request from U1 came, it computed C1 and stored the ETag under the key /catalog/123/item/456. When the same request came from U2, it just verified that U2 is permitted to see the result.
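To make the intended flow concrete, here is a minimal Python sketch of the proxy logic I have in mind. The X-ETag-Only probe header is just an invented convention for illustration, not anything standard, and the backend is assumed to answer such a probe with an ETag and an empty body:

import requests

cache = {}  # maps ETag -> response body

def handle(url, client_headers):
    # Probe: the server may answer with only an ETag (case 1) or with full content (case 2).
    probe = requests.get(url, headers={**client_headers, "X-ETag-Only": "true"})
    etag = probe.headers.get("ETag")

    if etag and not probe.content:          # case 1: ETag only
        if etag in cache:                   # case 1.1: cache hit, server never touched the content
            return etag, cache[etag]
        full = requests.get(url, headers=client_headers)  # case 1.2: cache miss, ask again
        cache[full.headers["ETag"]] = full.content
        return full.headers["ETag"], full.content

    if etag:                                # case 2: full response came back directly
        cache[etag] = probe.content
    return etag, probe.content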

Q1: It is a GET request. The server can answer with a 304 Not Modified without a body.
Q2: openresty (nginx with some additional modules) can do it, but you will need to implement some logic yourself (see more detailed description below).
Q3: This sounds like a reasonable idea given the information in your question. Just some food for thought:
You could also split the page into user-specific and generic parts, which can be cached independently.
You shouldn't expect the cache to keep the calculated responses forever. So, if the server returns a 304 not modified with etag: 777777 (as per your example), but the cache doesn't know about it, you should have an option to force re-building the answer, e.g. with another request with a custom header X-Force-Recalculate: true.
Not exactly part of your question, but: Make sure to set a proper Vary header to prevent caching issues.
If this is only about permissions, you could maybe also work with permission info in a signed cookie. The cache could derive the permissions from the cookie without asking the server, and the cookie is tamper-proof thanks to the signature.
Q4: I would use openresty for this, specifically the lua-resty-redis module. Put the cached content into a redis key-value-store with the etag as key. You'd need to code the lookup logic in Lua, but it shouldn't be more than a couple of lines.
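The lookup logic really is tiny. Not Lua/openresty, but here is a rough Python equivalent against Redis (using the redis-py client) just to show the shape of it; the key prefix and TTL are arbitrary assumptions:

import redis

r = redis.Redis()

def get_cached_body(etag):
    return r.get(f"body:{etag}")                  # None on a cache miss

def store_body(etag, body, ttl_seconds=3600):
    r.setex(f"body:{etag}", ttl_seconds, body)    # expire entries so the cache can't grow forever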

Related

Is server code executed when server returns http status code 304

Does/should server code run when the server returns status code 304?
I understand that the server should not return anything (the client should use its cache), but I can't find any info on whether the server will execute the code in an API endpoint, for example.
RFC 2616 section 10.3.5 describes the 304 Not Modified response:
If the client has performed a conditional GET request and access is
allowed, but the document has not been modified, the server SHOULD
respond with this status code. The 304 response MUST NOT contain a message-body
The server will send Date, ETag and/or Content-Location (as it would for a 200 response), or Expires, Cache-Control and/or Vary if their values might differ from those sent previously.
What is a Conditional GET?
A client application, browser, or proxy that has retained a Last-Modified or ETag value will issue a conditional GET, sending the retained value in an If-Modified-Since or If-None-Match request header. This allows the client to determine whether the resource has been updated.
How does the client know if the resource changed?
Well it depends on how the server is configured.
The Origin Server May:
Ignore caching, serve every request new.
Similar to ignoring, you may develop the application to change query strings. This prevents caching at Proxy Servers and invalidates the client cache.
If configured to do so, issue a Last-Modified or ETag value. This is often done for static content. Proxy servers and client caches use these to validate their versions.
A web application could issue an Expires date far in the future and then change the URL to invalidate stale content. This requires the application to be developed with this feature in mind.
Resources may also be issued version numbers. This allows them to preserve Proxy Caches but invalidate Client Caches.
Does server code run when the server returns status code 304?
Almost never, disregarding that it is technically possible for an incorrectly configured application to respond with a 304 Not Modified code instead of a 200.
With an ETag and/or Content-Location value, a server (nginx, for example) can confirm nothing has changed without issuing a call to the application. Resources with version numbers are handled neatly in the same way.
For query strings (image.jpg?version=12), the client cache will invalidate the content; a proxy server will also invalidate it, and the resource will be requested fresh.
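Whether application code runs for a 304 ultimately depends on who evaluates the validator. If the application evaluates If-None-Match itself, it can at least skip the expensive part. A rough sketch, where the function names and the cheap version lookup are made up for illustration:

def handle_request(request_headers):
    # A cheap version token (e.g. a counter or rowversion), not a hash of the full payload.
    current_etag = '"' + cheap_version_lookup() + '"'

    if request_headers.get("If-None-Match") == current_etag:
        # Validator matched: 304 with no body, and the expensive work below never runs.
        return 304, {"ETag": current_etag}, b""

    body = expensive_render()   # only reached when the validator misses
    return 200, {"ETag": current_etag, "Cache-Control": "max-age=0"}, body

def cheap_version_lookup():
    return "12"                 # placeholder for a cheap version/counter lookup

def expensive_render():
    return b"...the full response body..."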
I understand that the server should not return anything (the client should use its cache), but I can't find any info on whether the server will execute the code in an API endpoint, for example.
I'm a fan of nginx, here is a good resource on how caching applies to it.
In short, the more you do to support the various caches between your client and the application, the more requests you can support per day.

Nginx: allow only certain cookies in http response

I'm using Play Framework and nginx. Play Framework may add the following cookies to the HTTP response: PLAY_SESSION, PLAY_FLASH, PLAY_LANG.
I want to make sure that only the above cookies (PLAY_*) are allowed, enforced at the nginx level. If there are other cookies (let's say they're added accidentally), they should be removed by nginx.
How can I allow only predefined cookies in the HTTP response in nginx?
PS: If it's not possible to solve this in nginx, I need to fix it in Play Framework.
How do cookies work?
First, let's establish what cookies are: little pieces of "sticky" hidden information that let you keep state on your website for a given User-Agent. These cookies are often used for tracking users, keeping sessions and storing minor preference information for the site.
Set-Cookie HTTP response header (from server to client)
Cookies can be set by the server through the Set-Cookie response header (with a separate header for each cookie), or, after the page has already been transferred from the server to the client, through JavaScript.
Note that setting cookies is a pretty complex job — they have expiration dates, http/https settings, path etc — hence the apparent necessity to use a separate Set-Cookie header for each cookie.
This requirement to have a separate header is usually not an issue, since cookies aren't supposed to be modified all that often, as they usually store very minimal information, like a session identifier, with the heavy-duty information being stored in an associated database on the server.
Cookie HTTP request header (from client to server)
Regardless of how they were first set, cookies are then included in eligible subsequent requests from the client to the server, using the Cookie request header, with the whole list of eligible cookies in one header.
Note that, as such, the cookies sent by the client back to the server are a simple list of name/value pairs, without any extra information about the underlying cookies that store these attributes on the client side (e.g., the expiration dates, http/https settings and paths are saved by the client internally, but are not revealed in subsequent requests to the server).
This conciseness of the Cookie request header field is important, because, once set, eligible cookies will be subsequently included in all forthcoming requests for all resources with the eligible scheme / domain / path combination.
Caching issues with cookies.
The normal issue of using cookies, especially in the context of acceleration and nginx, is that:
cookies invalidate the cache by default (e.g., unless you use proxy_ignore_headers Set-Cookie;),
or, if you do sloppy configuration, cookies could possibly spoil your cache
e.g., through the client being able to pass cookies to the upstream in the absence of proxy_set_header Cookie "";,
or, through the server insisting on setting a cookie through the absence of proxy_hide_header Set-Cookie;.
How does nginx handle cookies?
Cookie from the client
Note that nginx does support looking through the cookies that the client sends to it (in the Cookie request header) through the $cookie_name scheme.
If you want to limit the client to only be sending certain cookies, you could easily re-construct the Cookie header based on these variables, and send only whichever ones you want to the upstream (using proxy_set_header as above).
Or, you could even make decisions based on the cookie to decide which upstream to send the request to, or to have a per-user/per-session proxy_cache_key, or make access control decisions based on the cookies.
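The Cookie request header is just a semicolon-separated list, so the filtering itself is trivial. Here is a rough, framework-agnostic Python sketch of the transformation (in nginx you would express the same thing with the $cookie_* variables and proxy_set_header); the whitelist matches the question, the function name is mine:

ALLOWED = {"PLAY_SESSION", "PLAY_FLASH", "PLAY_LANG"}

def filter_cookie_header(raw_cookie_header):
    # Keep only whitelisted name=value pairs before forwarding the header upstream.
    pairs = [p.strip() for p in raw_cookie_header.split(";") if "=" in p]
    kept = [p for p in pairs if p.split("=", 1)[0].strip() in ALLOWED]
    return "; ".join(kept)

# filter_cookie_header("PLAY_SESSION=abc; tracking=xyz; PLAY_LANG=en")
# -> "PLAY_SESSION=abc; PLAY_LANG=en"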
Set-Cookie from the backend
As for the upstream sending back the cookies, you can, of course, decide to block it all to considerably improve the caching characteristics (if applicable to your application, or parts thereof), or fix up the domain and/or path with proxy_cookie_domain and/or proxy_cookie_path, respectively.
Otherwise, it's generally too late to make any other routing decision — the request has already been processed by the selected upstream server, and the response is ready to be served — so, naturally, there doesn't seem to be a way to look into these individual Set-Cookie cookies through normal means in nginx (unless you want to go third-party modules, or lua, or perl), since it'd already be too late to make any important routing decisions for a completed request.
Basically, these Set-Cookie cookies have more to do with the content than with the way it is served or routed, so, it doesn't seem appropriate to have integrated functionality to look into them through nginx.
(If you do need to make routing decisions after the completion of the request, then nginx does support X-Accel-Redirect, as well as some other special headers.)
If your issue is security, then, as I've pointed out above, the upstream developer can already use JavaScript to set ANY extra cookies however they want, so, effectively, trying to use nginx to limit some, but not all, Set-Cookie responses from the server is kind of a pointless endeavour in the real world (as there is hardly any difference between the cookies set through JavaScript compared to Set-Cookie).
In summary:
you can easily examine and reconstruct the Cookie header sent by the client to the server before passing it over to the backend, and only include the sanctioned cookies in the request to upstream backend,
but, unless you want to use lua/perl, or have your own nginx module (as well as possibly quarantine the JavaScript from the pages you serve), then you cannot pass only certain Set-Cookie headers back from the upstream backend to the client with a stock nginx.conf — with the Set-Cookie headers, it's an all-or-nothing situation, and there doesn't seem to be a good-enough use-case for a distinct approach.
For an Nginx solution it might be worth asking over at serverfault. Here is a potential solution via Play Framework.
package filters

import javax.inject._
import play.api.mvc._
import scala.concurrent.ExecutionContext

@Singleton
class ExampleFilter @Inject()(implicit ec: ExecutionContext) extends EssentialFilter {
  override def apply(next: EssentialAction) = EssentialAction { request =>
    next(request).map { result =>
      // Discard every cookie the action set, then re-add only the whitelisted ones.
      val cookieWhitelist = List("PLAY_SESSION", "PLAY_FLASH", "PLAY_LANG")
      val allCookies = result.newCookies.map(c => DiscardingCookie(c.name))
      val onlyWhitelistedCookies = result.newCookies.filter(c => cookieWhitelist.contains(c.name))
      result.discardingCookies(allCookies: _*).withCookies(onlyWhitelistedCookies: _*)
    }
  }
}
This solution utilizes Filters and Result manipulation. Do test for adverse effects on performance.

Checking if HTTP resource has changed after maximum cache time has expired

I'm trying to work out a new caching policy for the static resources on a website. A common problem is that whenever JavaScript, CSS, etc. are updated, many users hold onto stale versions because currently no caching-specific HTTP headers are included in the file responses.
This becomes a serious problem when, for example, the javascript updates are linked to server-side updates, and the stale javascript chokes on the new server responses.
Eliminating browser caching completely with a cache-control: max-age=0, no-cache seems like overkill, since I'd still like to take some pressure off the server by letting browsers cache temporarily. So, setting the cache policy to a maximum of one hour seems alright, like cache-control: max-age=3600, no-cache.
My understanding is that this will always fetch a new copy of the resource if the cached copy is older than one hour. I'd specifically like to know if it's possible to set a HTTP header or combination of headers that will instruct browsers to only fetch a new copy if the resource was last checked more than one hour ago AND if the resource has changed.
I'm just trying to avoid browsers blindly fetching new copies just because the cached resource is older than one hour, so I'd also like to add the condition that the resource has been changed.
Just to illustrate further what I'm asking:
New user arrives at site and gets fresh copy of script.js
User stays on site for 45 mins, browser uses cached copy of script.js all the time
User comes back to site 2 hours later, and browser asks the server if script.js has changed
If it has, then it gets a fresh copy and the process repeats
If it has not changed, then it uses the cached copy for the next hour, after which it will check again
Have I misunderstood things? Is what I'm asking how it actually works, or do I have to do something different?
Have I misunderstood things? Is what I'm asking how it actually works,
or do I have to do something different?
You have some serious misconceptions about what the various cache-control directives do and why caches behave as they do.
Eliminating browser caching completely with a cache-control:
max-age=0, no-cache seems like overkill, since I'd still like to take
some pressure off the server by letting browsers cache temporarily ...
The no-cache option is wrong too. Including it means the browser will always
check with the server for modifications to the file every time.
That isn't what no-cache means or what it is intended for. It means that a client MUST NOT use a cached copy to satisfy a subsequent request without successful revalidation; it does not and has never meant "do not cache". That is what the no-store directive is for.
Also, the max-age directive is just the primary means for caches to calculate the freshness lifetime and expiration time of cached entries. The Expires header (minus the value of the Date header) can also be used, as can a heuristic based on the current UTC time and any Last-Modified header value.
Really, if your goal is to retain the cached copy of a resource for as long as it is meaningful, whilst minimising requests and responses, you have a number of options.
The Etag (Entity Tag) header - this is supplied by the server in response to a request in either a "strong" or "weak" form. It is usually a hash based on the resource in question. When a client re-requests a resource it can pass the stored value of the Etag with the If-None-Match request header. If the resource has not changed then the server will respond with 304 Not Modified.
You can think of ETags as fingerprints for resources. They can be used to massively reduce the amount of information sent over the wire, as only fresh data is served, but they do not have any bearing on the number or frequency of requests.
The Last-Modified header - this is supplied by the server in response to a request, in HTTP-date format. It tells the client the last time the resource was modified.
When a client re-requests a resource it can pass the stored value of the last-modified header with the If-Modified-Since request header. If the resource has not changed since the time it was last modified then the server will respond with 304 Not Modified.
You can think of Last-Modified as a weaker form of entity checking than ETags. It addresses the same problem (bandwidth/redundancy) in a less robust way, and again it has no bearing at all on the actual number of requests made.
Revving - a technique that uses a combination of the Expires header and the name (URL) of a resource (see Steve Souders' blog post).
Here one basically sets a far forward Expires header - say 5 years from now - to ensure the static resource is cached for a long time.
You then have two options for updating: either append a versioning query string to the request URL, e.g. "/mystyles.css?v=1.1", and update the version number as and when the resource changes; or, better, version the file name itself, e.g. "/mystyles.v1.1.css", so that each version is cached for as long as possible.
This way, not only do you reduce the amount of bandwidth, you also eliminate all checks to see if the resource has changed until you rename it.
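The versioned-filename variant is easy to automate at build time. A rough sketch, where the content-hash naming scheme and paths are my own assumptions rather than anything prescribed:

import hashlib, shutil

def rev(path):
    # Copy mystyles.css to mystyles.<hash>.css so the name changes whenever the content does.
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()[:8]
    stem, ext = path.rsplit(".", 1)
    revved = f"{stem}.{digest}.{ext}"
    shutil.copyfile(path, revved)
    return revved   # e.g. "mystyles.9f86d081.css", to be referenced from the HTML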
I suppose the main point here is that none of the cache-control directives you mention (max-age, public, etc.) have any bearing at all on whether a 304 response is generated or not. For that, use either ETag / If-None-Match or Last-Modified / If-Modified-Since, or a combination of them (with If-Modified-Since as a fallback mechanism to If-None-Match).
It seems that I have misunderstood how it works, because some testing in Chrome has revealed exactly the behavior that I was looking for in the 5 steps I mentioned.
It doesn't blindly grab a fresh copy from the server when the max-age has expired. It does a GET, and if the response is 304 (Not Modified), it continues using the cached copy until the next hour has expired, at which point it checks for changes again etc.
The no-cache option is wrong too. Including it means the browser will always check with the server for modifications to the file every time. So what I was really looking for is:
Cache-Control: public, max-age=3600
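For what it's worth, the five-step behaviour I described looks roughly like this when expressed in code from the client's side; a sketch using Python's third-party requests library, with a made-up in-memory cache standing in for the browser cache:

import time
import requests

cache = {}        # url -> {"body": ..., "etag": ..., "fetched_at": ...}
MAX_AGE = 3600    # mirrors Cache-Control: max-age=3600

def fetch(url):
    entry = cache.get(url)
    if entry and time.time() - entry["fetched_at"] < MAX_AGE:
        return entry["body"]                      # still fresh: no request at all

    headers = {"If-None-Match": entry["etag"]} if entry and entry["etag"] else {}
    resp = requests.get(url, headers=headers)
    if resp.status_code == 304:                   # unchanged: keep the cached copy for another hour
        entry["fetched_at"] = time.time()
        return entry["body"]

    cache[url] = {"body": resp.content,
                  "etag": resp.headers.get("ETag"),
                  "fetched_at": time.time()}
    return resp.content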

How do I deal with different requests that map to the same response?

I'm designing a Web service. The request is idempotent, so I chose the GET method. The response is relatively expensive to calculate and not small, so I want to get caching (at the protocol level) right. (Don't worry about memoisation on my part, I have that covered already; my question here is about paying attention to the Web as a whole.)
There's only one mandatory parameter and a number of optional parameters with default values if missing. For example, the following two map to the same representation of the response. (If this is a dumb way to design the interface, propose something better.)
GET /service?mandatory_parameter=some_data HTTP/1.1
GET /service?mandatory_parameter=some_data;optional_parameter=default1;another_optional_parameter=default2;yet_another_optional_parameter=default3 HTTP/1.1
However, I imagine clients do not know this and would treat them separate and therefore waste cache storage. What should I do to avoid violating the golden rule of caching?
Make up a canonical form, document it (e.g. all parameters are required after all and need to be sorted in a specific order) and return a client error unless the required form is met?
Instead of an error, redirect permanently to the canonical form of a request?
Or is it enough not to mind what the request looks like, and just respond with the same ETag for the same responses?
First, don't use semicolons as a delimiter in a query string. You should be using ? to begin a query string and & to delimit variable/value pairs. RFC 3986 doesn't explicitly say you have to use &, but the vast majority of existing code uses this delimiter because of the application/x-www-form-urlencoded precedent.
Second, you're right, in that parameters in a query string result in a different URI, and thus, as far as caches are concerned, a different resource. Assuming you want optimal caching performance, if you know that an optional parameter has been specified, and its inclusion is unnecessary and does not affect the representation that will be transmitted, you should be making a redirect to a canonical representation that omits the parameter. (i.e., An optional parameter is given with a value that is set to the default value. For example, if you have http://example.com:80/, you can normalize to http://example.com/ because 80 is the default value for the port with HTTP. You can do the same for query parameters since you control the URI space.) If you have parameters included (optional or otherwise) that appear in an order other than the canonical order, you should redirect for that too. A 301 redirect would be preferred if you know that the relationship between URIs will be stable. Otherwise, do a 302/307 redirect as appropriate. I would recommend defining your canonical form the same way that OAuth does: Sort each parameter alphabetically, first by key, then by value. Other normalization operations will also help out here. RFC 3986 has an entire section on URI normalization that will be relevant to you. This technique will really only work for GET, and redirects on PUT/POST/DELETE are not generally recommended.
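A rough sketch of the drop-defaults-sort-and-redirect idea described above. The parameter names and default values are taken from the question; the function names and the use of & as the delimiter (per the first point) are my own assumptions:

from urllib.parse import urlencode, parse_qsl

DEFAULTS = {"optional_parameter": "default1",
            "another_optional_parameter": "default2",
            "yet_another_optional_parameter": "default3"}

def canonical_query(query_string):
    # Drop optional parameters explicitly set to their defaults, then sort the rest by key.
    params = dict(parse_qsl(query_string))
    kept = {k: v for k, v in params.items() if DEFAULTS.get(k) != v}
    return urlencode(sorted(kept.items()))

def maybe_redirect(path, query_string):
    canonical = canonical_query(query_string)
    if canonical != query_string:
        return 301, {"Location": f"{path}?{canonical}"}   # redirect to the canonical form
    return None                                           # already canonical; serve it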
Third, ETags are great, and they provide a huge performance improvement if implemented well by both the client and server. However, it's unfortunately rare for both sides to do it right. Ditto for Last-Modified. You should pursue these, because the CPU and bandwidth savings are significant when it works, but they are not sufficient on their own. Other headers like Cache-Control are also frequently necessary. It's worth familiarizing yourself with Section 13 of RFC 2616 if you're planning on going into great detail on this stuff.
Finally, a word of warning — there is an issue with these redirects you need to be aware of: Clients trying to access your resources may frequently be redirected to other locations. This introduces overhead that only gives you an overall savings if the clients make subsequent requests against the same resource, maintaining state to avoid the subsequent redirect. Unless you've open-sourced a reference client implementation that takes advantage of your caching optimizations, you may never benefit from these tweaks.
I would pick option (2) in your list - I would make the request RESTful, rather than RPC like.
I.e. in this case, if you make all of the parameters parts of the request path:
/service/mandatory_parameter/some_data/optional_parameter/default1/another_optional_parameter/default2/yet_another_optional_parameter/default3
In the case where not all of the optional parameters are specified, return a 301 (Permanent redirect) to the full resource name with the defaults filled in. This will (or should) be cached by clients and web caches appropriately, and even if it gets to your backend then making the 301 should be very cheap for you.
At which point, you have one canonical form for the URI, and caching will work as normal/expected.
This does mean that every combination of parameters will be cached separately (as a 301), however that's fine really as the non-canonical requests will have an independent cache policy to the full request and clients which are worried about the extra round trip can fill in all the parameters themselves.
Your option (3) won't work as you expect - each form will be cached independently as they're different URIs.
It should also be noted that a lot of downstream caches / software won't cache your response at all due to the query parameters, which is why I suggest turning it into a 'proper' resource..
First, it's a good thing you chose GET, since other methods don't have as good caching support. As far as I know, browsers cache URIs with respect to the parameters, so I don't think it's a good idea to use a canonical form.
One thing that you don't state here is how this service is going to be used. If those requests are made from a browser (and it looks to me like they are probably issued from a script), the requests will probably look the same even when they are made more than once. So make sure that whatever generates the URI ends up with the same URI for equal input data (remove default parameters or always include them).
When it comes to the ETag, I recommend you have one, though I would like to clarify how it works: you get the request, you do all your "expensive calculations", and then, if there was an If-None-Match header with the same hash (ETag) as your processed response, you may return 304 Not Modified. So the ETag is used to avoid transmitting the response if the client already has it. (Sure, you may implement caching on the server side, but that is better done based on the input parameters.)
To further improve cache hits on the client side, you may want to set proper caching headers in your response.
I asked almost the same question myself some months ago. I'll describe my answer using my implementation as an example.
On the server side I have a WCF service which receives requests in one of the following forms:
GET /Service/RequestedData?param1=data1&param2=data2…
GET /Service/RequestedData/IdOfData?param1=data1&param2=data2…
PUT /Service/RequestedData/IdOfData // with param1=data1&param2=data2… in body
POST /Service/RequestedData/IdOfData // with param1=data1&param2=data2… in body
DELETE /Service/RequestedData/IdOfData
So the requests are in REST form, but the GET requests have some optional parameters. This part especially is of interest to you.
Because WCF supports URL templates, the prototype of a function which replies to a client request looks like:
[WebGet(UriTemplate = "RequestedData?param1={myParam1}&param2={myParam2}",
        ResponseFormat = WebMessageFormat.Json)]
[OperationContract]
MyResult GetData(string myParam1, int myParam2);
All requests like
GET /Service/RequestedData?param1=&param2=data2
GET /Service/RequestedData?param2=data2&param1=
GET /Service/RequestedData?param2=data2
will be mapped to the same call on the side of my WCF service. So that is one problem less.
Now, at the beginning of every method which responds to an HTTP GET request, I set "Cache-Control: max-age=0" in the HTTP header. This means the client always tries to revalidate its browser cache, and no Ajax request will simply be answered from the local cache (as Internet Explorer likes to do).
Next, I always calculate an ETag based on my data. The exact algorithm is a subject for a separate discussion, but what is important is that every response to an HTTP GET request carries an ETag in its HTTP header.
So clients always revalidate their local cache and send a GET request to the server. They send the ETag from their local cache inside the If-None-Match HTTP header. The server computes the ETag of the data that would be sent back for this GET request. If the ETag of the data is the same as in the client request, the server sends back a response with an empty body and the code 304 Not Modified. In this case the browser takes the data from its local cache.
If the same client, for whatever reason, creates a new form of the URL which the web browser interprets as a new URL, then the browser will not find the old server response in its local cache and will send the same request to the server once more. Is that a real problem? The server just sends the data one more time. If you have server-side caching you can optimize a little further. In most cases the URLs of GET requests are produced by client-side JavaScript, so this situation will hardly ever occur.
Calculating the ETag, setting the "Cache-Control: max-age=0" and ETag headers, and returning the 304 Not Modified code all have to be done by the WCF service, but it is very easy.
Most important is that my implementation of the ETag calculation is not as expensive as fetching all the data from the database server and calculating an MD5 hash of it. I consistently use the rowversion data type in every row of data in the SQL Server database. This rowversion is nothing other than a counter of changes in the database. If someone changes a row of data, the rowversion value in the corresponding row is incremented. So if one SELECTs the maximum rowversion value, and it has not changed compared with previous requests, one can be sure that the data were not changed in that time period. The ETag calculation additionally has to be sensitive to deletion of data from the table, but that is also a solved problem. You can read a little more about this in Concurrency handling of Sql transactrion.
I don't want to suggest my ETag calculation as the best choice; I only want to say that calculating the ETag can be much cheaper than calculating MD5 over the whole data.
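To make the idea concrete, here is a rough sketch of that kind of cheap ETag. The table and column names, the pyodbc driver and the row-count guard against deletions are illustrative assumptions, not my exact code:

import pyodbc

def compute_etag(conn):
    # A cheap version token: the highest rowversion plus a row count (the count catches deletions).
    cur = conn.cursor()
    cur.execute("SELECT MAX(CAST(RowVer AS BIGINT)), COUNT(*) FROM dbo.RequestedData")
    max_rowversion, row_count = cur.fetchone()
    return f'"{(max_rowversion or 0):x}-{row_count}"'

# conn = pyodbc.connect("DSN=MyDb")   # connection string is a placeholder
# etag = compute_etag(conn)           # compare against the If-None-Match request header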
In case of errors, the server throws an exception which is mapped to an HTTP code that I define in the throw statement. As the body, WCF sends a standard JSON object {"description":"My error text"}. A custom error object is also possible (see Is WebProtocolException included in .net 4.0?). On the client side I use jQuery, and in the error handler of the corresponding jQuery.ajax call the error message is decoded and displayed to the user.
So my recommendation: use an ETag together with "Cache-Control: max-age=0" for all HTTP GET requests. For all other requests I recommend you implement a RESTful service. For the error handling you should look for the most native way supported by the software used for the server and client implementations, and use that.
UPDATE: To clarify the URL structure, I should add the following. In my service the main part, like GET /Service/RequestedData/IdOfData, describes the requested data objects. The parameters param1=data1&param2=data2 mostly correspond to information about sorting, paging and filtering of the data. I make active use of the jqGrid plugin for jQuery, and if the end user scrolls the grid to the next page, clicks on a column header (sorting) or sets a filter via the search feature, all of these result in different optional parameters being appended to the main URL.

ETag vs Header Expires

I've looked around but haven't been able to figure out if I should use both an ETag and an Expires Header or one or the other.
What I'm trying to do is make sure that my Flash files (and other images and whatnot) only get updated when there is a change to those files.
I don't want to do anything special like changing the filename or putting some weird chars on the end of the url to make it not get cached.
Also, is there anything I need to do programmatically on my end in my PHP scripts to support this, or is it all Apache?
They are slightly different - the ETag does not have any information that the client can use to determine whether or not to make a request for that file again in the future. If ETag is all it has, it will always have to make a request. However, when the server reads the ETag from the client request, the server can then determine whether to send the file (HTTP 200) or tell the client to just use their local copy (HTTP 304). An ETag is basically just a checksum for a file that semantically changes when the content of the file changes.
The Expires header is used by the client (and proxies/caches) to determine whether or not it even needs to make a request to the server at all. The closer you are to the Expires date, the more likely it is the client (or proxy) will make an HTTP request for that file from the server.
So really what you want to do is use BOTH headers - set the Expires header to a reasonable value based on how often the content changes. Then configure ETags to be sent so that when clients DO send a request to the server, it can more easily determine whether or not to send the file back.
One last note about ETag - if you are using a load-balanced server setup with multiple machines running Apache you will probably want to turn off ETag generation. This is because inodes are used as part of the ETag hash algorithm which will be different between the servers. You can configure Apache to not use inodes as part of the calculation but then you'd want to make sure the timestamps on the files are exactly the same, to ensure the same ETag gets generated for all servers.
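(In Apache this is the FileETag directive; FileETag MTime Size drops the inode component.) Just to illustrate why an inode-free ETag is stable across load-balanced servers, here is a small sketch in the spirit of that scheme, not Apache's exact format; it assumes file contents and timestamps are synchronized between machines:

import os

def inode_free_etag(path):
    # Derived only from mtime and size, so every server with the same file produces the same tag.
    st = os.stat(path)
    return f'"{int(st.st_mtime):x}-{st.st_size:x}"'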
ETag and Last-Modified headers are validators.
They help the browser and/or the cache (reverse proxy) understand whether a file/page has changed, even if it keeps the same name.
Expires and Cache-Control give freshness information.
That means they inform the browser and the reverse proxies in between up to what time, or for how long, they may keep the page/file in their cache.
So the question usually is which validator to use, ETag or Last-Modified, and which freshness header to use, Expires or Cache-Control.
Expires and Cache-Control are "strong caching headers"
Last-Modified and ETag are "weak caching headers"
First the browser checks Expires/Cache-Control to determine whether or not to make a request to the server.
If it has to make a request, it will send the Last-Modified/ETag validators in the HTTP request. If the ETag value of the document matches, the server will send a 304 code instead of 200, with no content, and the browser will load the content from its cache.
Another summary:
You need to use both. ETags are "server-side" information; Expires is "client-side" caching.
Use ETags, except if you have a load-balanced set of servers. They are safe and will let clients know they should get new versions of your server files every time you change something on your side.
Expires must be used with caution: if you set an expiration date far in the future but want to change one of the files immediately (a JS file for instance), some users may not get the modified version for a long time!
One additional thing I would like to mention that some of the answers may have missed is the downside to having both ETags and Expires/Cache-control in your headers.
Depending on your needs, it may just add extra bytes to your headers, which can increase packet count and thus TCP overhead. You should weigh whether having both things in your headers is necessary, or whether it just adds extra weight to your requests and reduces performance.
You can read more about it on this excellent blog post by Kyle Simpson: http://calendar.perfplanet.com/2010/bloated-request-response-headers/
In my view, with the Expires header the server can tell the client when its data would become stale, while with an ETag the server checks the ETag value on every client request.
The ETag is used to determine whether the client should keep using its cached copy, while the Expires header, like Cache-Control, tells the client that until the cache expires it should use the local resource without asking.
Modern sites often serve files whose names include a hash, like app.98a3cf23.js, so it's good practice to use the Expires header. Besides this, it also reduces network cost.
Hope it helps ;)
ETag is a hash indicating the version of the resource. When the server returns data, it hashes the data and sets this hash value as the ETag. When you send a "PUT" request to the server to update a record, another user may simultaneously have made the same "PUT" request and had it processed already. The server will check your "PUT" data, see that it is the same update, and won't make another update; it will send you the data already updated (by the other user) and you will update your cache.
When the cache lifetime expires, the browser automatically makes a new request to get fresh data. That is what the "Expires" header is used for.
If a response includes both an Expires header and a max-age directive,
the max-age directive overrides the Expires header, even if the
Expires header is more restrictive. This rule allows an origin server
to provide, for a given response, a longer expiration time to an
HTTP/1.1 (or later) cache than to an HTTP/1.0 cache. This might be
useful if certain HTTP/1.0 caches improperly calculate ages or
expiration times, perhaps due to desynchronized clocks.
