I was going through the Firefox local cache folder and found a lot of files containing the X-Cache header. Can someone explain the purpose of this header?
Thanks.
A CDN (Content Delivery Network) adds the X-Cache header to the HTTP response. X-Cache: HIT means that your request was served by the CDN, not the origin servers. A CDN is a network designed to cache content, so that user requests are served faster and the origin servers are unloaded.
The 'X' prefix in X-Cache indicates that the header is not a standard HTTP header field, and its meaning varies from one proxy implementation to another. A common place to find these header fields is Squid servers. Organizations and universities place proxy (Squid) servers between their internal network and the outside network. This serves two purposes: security, and caching frequently requested web pages (in order to limit network traffic).
X-Cache indicates whether the proxy served the result from its cache (HIT for yes, MISS for no).
X-Cache-Lookup indicates whether the proxy had a cacheable response to the request (HIT for yes, MISS for no).
Both being HIT means that the client made a cacheable request and the proxy had a matching cacheable response, which was forwarded back to the client.
If X-Cache is MISS and X-Cache-Lookup is HIT, the client made a request that had a cacheable response but forced the proxy to bypass its cache. This is a hard refresh, which can be simulated with Ctrl + F5 or by sending the headers Pragma: no-cache (HTTP/1.0) and Cache-Control: no-cache (HTTP/1.1).
If both are MISS, the proxy holds no valid cached object corresponding to the request.
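For illustration, a response that went through a Squid proxy might carry headers like these (hostname and port are hypothetical):
HTTP/1.1 200 OK
X-Cache: HIT from proxy.example.edu
X-Cache-Lookup: HIT from proxy.example.edu:3128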
Some Useful Resources:
X-Cache and X-Cache-Lookup Headers
Understanding cache HIT and MISS headers with shielded services
X-Cache "is NOT a standard HTTP header field".
Also, check out X-Cache and X-Cache-Lookup headers explained.
For me this was related to a FastCGI cache header set in an Nginx server block:
add_header X-Cache $upstream_cache_status;
After commenting out this line and restarting Nginx, the header was gone.
I serve my site with SSL and I want to accept cross-origin requests only from sites with SSL as well.
Is it possible?
The basic logic that you need is:
IF the value of the Origin request header STARTS WITH https:
THEN include an Access-Control-Allow-Origin response header with its value copied from the Origin request header.
ELSE … don't.
The specifics of how you implement that pseudo-code will depend on what technology you are using on your server.
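For example, if the site happens to be served by nginx, a minimal sketch could look like the following (the map block and server names are hypothetical; note that nginx omits a header whose value is the empty string):
# Copy the Origin header into $cors_origin only when it starts with https:
map $http_origin $cors_origin {
    default     "";
    ~^https:    $http_origin;
}

server {
    listen 443 ssl;
    server_name www.example.com;

    location / {
        # Echoed back only for https origins; empty value means no header.
        add_header Access-Control-Allow-Origin $cors_origin;
    }
}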
The server is receiving thousands of OPTIONS requests due to CORS (Cross-Origin Resource Sharing). Right now, every OPTIONS request is sent to one of the servers, which is a bit wasteful, knowing that HAProxy can add the CORS headers itself without the help of a web server.
frontend https-in
...
use_backend cors_headers if METH_OPTIONS
...
backend cors_headers
rspadd Access-Control-Allow-Origin:\ https://www.example.com
rspadd Access-Control-Max-Age:\ 31536000
However, for this to work I need to specify at least one live server in the cors_headers backend, and that server will still receive the requests.
How can I handle the request in the backend without specifying any servers? How can I stop the propagation of the request to servers, while sending the response to the browser and keeping the connection alive?
Edit for HAProxy 2.2 and above: In case you need to support a whitelist of origins, Lua scripts can now generate the entire response without having to pass the request to the backend server. Sample Lua script with simple integration instructions can be found here: https://github.com/haproxytech/haproxy-lua-cors
The only way to do this in HAProxy 1.5.14 is by manually triggering the 503 error (no servers available to handle the request) and setting the error page to a file with custom CORS headers.
backend cors_headers
errorfile 503 /path/to/custom/file.http
The file.http should contain the desired headers plus two empty lines at the end:
HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://www.example.com
Access-Control-Max-Age: 31536000
Content-Length: 0
Cache-Control: private
<REMOVE THIS LINE COMPLETELY>
This "method" has a couple of limitations:
there is no way to check the origin before sending the CORS headers, so you will either have to have a static list of allowed origins or you will have to allow all origins
lack of dynamic headers: you can't do
http-response set-header Date %[date(),http_date]
or set an Expires header.
Note: if you update the HTTP file over time, you will have to restart HAProxy to apply the changes. It can be a graceful or a hard restart; in either case the new file will be loaded, cached, and served immediately.
Good news: HAProxy 2.2 just introduced the "Native Response Generator" feature. It works with the http-request return directive and can be used for serving static files or text strings, including dynamic parameters.
The goal is to avoid the usual hacks with errorfile.
Taking advantage of another directive introduced in version 2.2 (http-after-response), the OP's goal can be achieved with the following:
backend cors_headers
# http-response won't work here as the response is generated by HAP
http-after-response set-header Access-Control-Allow-Origin \
"%[req.hdr(Origin)]"
http-after-response set-header Access-Control-Max-Age "31536000"
http-request return status 200 content-type "text/plain" string "" if TRUE
The set-header and http-request return can be made conditional with an if clause based on the request headers or origin, depending on your needs (see the doc for examples).
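For example, a hedged sketch restricting the headers to one known origin (the ACL name and origin are placeholders) might look like this:
backend cors_headers
acl allowed_origin req.hdr(Origin) -m str https://www.example.com
http-after-response set-header Access-Control-Allow-Origin "%[req.hdr(Origin)]" if allowed_origin
http-after-response set-header Access-Control-Max-Age "31536000" if allowed_origin
http-request return status 200 content-type "text/plain" string "" if TRUE
Preflights from any origin still get a 200, but only allow-listed origins receive the Access-Control-* headers, so browsers will block the rest.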
With this technique the headers and response can use variables:
http-request return status 200 content-type "text/plain" \
lf-string "Hello, you are: %[src]"
This should work in most versions of HAProxy:
backend cors_headers
http-request deny deny_status 200
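Note that the deny-generated response itself carries no CORS headers. On HAProxy 2.2+ they can still be attached, because http-after-response also applies to responses generated by HAProxy itself; a sketch (origin value is a placeholder):
backend cors_headers
http-request deny deny_status 200
http-after-response set-header Access-Control-Allow-Origin "https://www.example.com"
http-after-response set-header Access-Control-Max-Age "31536000"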
Imagine we have an HTTP client, a proxy, and a web server acting as the backend. The proxy is configured to cache the backend's responses.
A request arrives, the proxy forwards it to the backend, the backend responds, and the proxy caches the response and sends it to the client.
Imagine the backend has set some cache-related headers in its response to the proxy's request. For example:
Cache-Control: no-cache (or)
Cache-Control: max-age=100000 (or)
Expires: 'Next Friday'
A question: will the next request from the client be processed by the proxy in compliance with those headers?
Another flavour of the question: is there a way for a proxy to understand that a resource is stale, other than its own static resource-lifetime settings?
A third variation: can the client force the proxy to fetch a fresh version of the resource if the proxy does not consider its cached version stale?
My question may look a bit overgeneralized, so let me make it specific: a browser + nginx proxy + nginx web server setup. If my setup works correctly, then once a resource has been cached by the proxy and nginx's proxy_cache_valid timeout has not yet expired, nothing can prevent the proxy from serving the cached response; whatever I do, the requests do not hit the backend.
It looks like nginx decides whether the cache is stale based only on the proxy_cache_valid setting, and the headers of the backend's response do not matter at all. I'm wondering whether my guess is correct, and whether it might be incorrect for other HTTP proxy setups: reverse proxies such as nginx, office-network internal proxies such as Squid, or public internet proxies.
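For what it's worth, the nginx part is testable with a small configuration. The nginx documentation states that caching parameters found in the response headers (X-Accel-Expires, Expires, Cache-Control) take priority over proxy_cache_valid unless they are explicitly ignored. A hypothetical test block (paths and names are placeholders):
proxy_cache_path /var/cache/nginx keys_zone=demo:10m;

server {
    listen 8080;
    location / {
        proxy_pass http://backend;
        proxy_cache demo;
        # Fallback lifetime, used only when the response carries no caching headers:
        proxy_cache_valid 200 10m;
        # Uncommenting the next line makes nginx disregard the backend's headers,
        # which would reproduce the behaviour described above:
        # proxy_ignore_headers Expires Cache-Control;
    }
}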
I'm currently working on a system where a client makes HTTP 1.1 requests to an origin server. I control both the client and the server software, so I have free rein over the HTTP headers set. Between the client and the origin are multiple, hierarchical layers of web proxy / cache devices (think Squid or similar).
The data served up by the origin is usually highly cacheable, and I intend to set HTTP response headers to indicate this. Specifically, I plan to use Cache-Control: public, max-age=<value>. I understand that this will mean that intermediate proxies will cache the response up to the specified max-age, at which point they will revalidate against the origin (presumably with an If-Modified-Since header based on the stored Last-Modified value, looking for a 304 response).
The problem I have is that the client might become aware that the data held by caches might now be invalid. In this case, I need the client to make a request which instructs the caches to either fetch or revalidate their response with the origin. If the origin response is now different, the cache should store this new response. In my mind, this would involve the client making the request, and each cache in the chain should revalidate its response with the next upstream device, all the way back to the origin. The new response can then be served from the closest cache which actually has it.
What are the correct HTTP headers to set on the client request to achieve this? At first I thought that setting Cache-Control: no-cache in the HTTP request would make this happen, but reading the RFC, it seems that this will instruct the intermediate caches both to go back to the origin (desired) and not to cache the new response (not desired). I then saw an article suggesting that a request header of Cache-Control: max-age=0 would perhaps do this, but I'm not sure.
Will max-age=0 do what I need here, or do I need some other combination of HTTP headers?
I asked a similar question here: How to make proxy revalidate resource from origin. I have since learned that proxy revalidation wasn't supported by nginx at the time of writing; it was scheduled for the 1.5 release.
Sending max-age=0 from the client should trigger this revalidate mechanism in the proxy, if the original response from the origin contained the right cache control headers.
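Concretely, the client request would look something like this (path and host are hypothetical):
GET /resource HTTP/1.1
Host: origin.example.com
Cache-Control: max-age=0
max-age=0 tells every cache on the path that it may only answer after successfully revalidating its stored copy with the next upstream.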
But whether your upstream server(s) will respect these headers and revalidate with their origin is clearly not something you can just assume. If you have control over your upstream servers I think it could work.
Also, ETag is preferred over the Last-Modified/If-Modified-Since headers, as far as I know.
I found these to be helpful articles on the subject:
caching tutorial
cache control directives
http specs on validation
section 14.9.4 on this spec
[UPDATE]
Nginx version 1.5.8 has been released since, and I can confirm that this mechanism is now working!
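For reference, the directive in question (added, if I recall correctly, in nginx 1.5.7) is a one-liner in the proxy location:
proxy_cache_revalidate on;
With it enabled, nginx refreshes expired cache entries using conditional If-Modified-Since requests instead of refetching the full response body.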
An HTTP server uses content negotiation to serve a single URL either identity- or gzip-encoded, based on the client's Accept-Encoding header.
Now say we have a proxy cache like squid between clients and the httpd.
If the proxy has cached both encodings of a URL, how does it determine which to serve?
The non-gzip instance (not originally served with Vary) can be served to any client, but the encoded instances (having Vary: Accept-Encoding) can only be sent to clients whose Accept-Encoding header value is identical to the one in the original request.
E.g. Opera sends "deflate, gzip, x-gzip, identity, *;q=0" but IE8 sends "gzip, deflate". According to the spec, then, caches shouldn't share content-encoded caches between the two browsers. Is this true?
First of all, it's IMHO incorrect not to send "Vary: Accept-Encoding" when the entity indeed varies by that header (or its absence).
That being said, the spec currently does indeed disallow serving the cached response to Opera, because the Vary header does not match per the definitions in HTTPbis, Part 6, Section 2.6. Maybe this is an area where we should relax the requirements for caches (you may want to follow up on the IETF HTTP mailing list).
UPDATE: turns out that this was already marked as an open question; I just added an issue in our issue tracker for it, see Issue 147.
Julian is right, of course. Lesson: Always send Vary: Accept-Encoding when sniffing Accept-Encoding, no matter what the response encoding.
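That is, whenever the encoding was chosen based on the request, the response should look something like this (the Content-Type is just an example):
HTTP/1.1 200 OK
Content-Type: text/html
Content-Encoding: gzip
Vary: Accept-Encoding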
To answer my own question: if you mistakenly leave Vary out and a proxy receives a non-encoded response (without Vary), it can simply cache it and return it for every subsequent request, ignoring Accept-Encoding. Squid does this.
The big problem with leaving out Vary is that if the cache receives an encoded variant without Vary, it MAY send it in response to other requests even when their Accept-Encoding indicates that the client cannot understand the content.