This is how Googlebot fetched my site

This is how Googlebot views my site:
HTTP/1.1 302 Found
Connection: close
Pragma: no-cache
cache-control: no-cache
Location: /LageX/
What does it mean? Is it good or bad? Thanks.

It's bad.
The above indicates that the content of your site is temporarily available at another location. Unless you have a good reason to set up a temporary (302) redirect, you should either move your content to where it is expected or set up a permanent (301) redirect.
The Location: header, which is expected to hold the URI where the content is available, is itself invalid: its value is expected to be an absolute URI, something like http://domain.com/LageX/.
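For illustration only, here is a minimal sketch (Python's standard library http.server; the path and target URI are placeholders, not the asker's actual setup) of a handler that answers with a permanent redirect and an absolute Location value:

# Minimal sketch: answer with a 301 and an absolute Location URI.
# The path and target URI below are placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Permanent redirect: crawlers treat the new URI as the canonical one.
        self.send_response(301)
        self.send_header("Location", "http://domain.com/LageX/")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), RedirectHandler).serve_forever()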

Related

How can a user set no-cache on browser requests?

I understand, to some degree, the HTTP(S) response Cache-Control headers and the associated controls for caching, but what about the request Cache-Control headers? How does a user control their own request headers? Users of a normal browser have no way to manually tweak request parameters beyond what the URL itself indirectly generates.
How is request Cache-Control even a thing? Is it only intended for programmatically generated (curl, wget, JavaScript) HTTP(S) requests, or for interaction between caches and origins?
Most browsers don't give a lot of fine-grained cache control to users. They'll let you clear any local cache, which is purely a local operation. Many will also let you request a page with caching disabled; see Force browser to refresh css, javascript, etc for details.
To give a specific example, in Firefox an ordinary page request sends headers like:
GET /... HTTP/1.1
...
However, if I use 'Reload current page', the request will include cache-control headers to request uncached data from upstream:
GET /... HTTP/1.1
...
Cache-Control: max-age=0
...
Similarly for a resource on that page referenced through <img src...>:
GET /... HTTP/1.1
...
Accept: image/webp,*/*
...
Cache-Control: max-age=0
As you suggest, this isn't fine-grained control; I'm not aware of any browsers that allow anything as complex as choosing the max-age for regular browsing.
However, it is a good example of the general cache-control header interacting with the browser's user-facing functionality.
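For the programmatic case the question mentions, a client can set its own request Cache-Control header. A small sketch with Python's urllib (the URL is a placeholder), sending the same max-age=0 that Firefox sends on reload:

# Sketch: a programmatic request that sets its own Cache-Control header,
# mirroring what the browser sends on "Reload current page". Placeholder URL.
import urllib.request

req = urllib.request.Request(
    "http://example.com/",
    headers={"Cache-Control": "max-age=0"},  # ask caches along the way for fresh data
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.headers.get("Date"))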

Why do URL shortening services send 301 responses with caching headers that do not allow the browser to cache?

Why do URL shortening services like goo.gl and bit.ly send URL resolution responses with HTTP status code 301 and caching headers that do not allow the browser to actually cache? As a consequence, the browser always has to go back to the shortening service, even for a URL it has already resolved. In my opinion, 301 responses (permanent redirects) are meant to be cached; if not forever, then at least for a few minutes.
Relevant HTTP headers in a response from bit.ly
HTTP/1.1 301 Moved Permanently
Cache-Control: private, max-age=90
Relevant HTTP headers in a response from goo.gl
HTTP/1.1 301 Moved Permanently
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: Mon, 01 Jan 1990 00:00:00 GMT
As @deceze has pointed out, the reason for disabling or limiting caching is tracking/analytics, but the reason for using a permanent redirect (301) rather than a temporary one (302/307) is to ensure the "link juice", or SEO value, of the link is preserved. This way you can use the shortened links freely everywhere without worrying about devaluing link quality, and your site retains its search engine ranking for that page.
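To see these headers yourself, you can fetch a short link without following the redirect. A sketch using the third-party requests package (the short URL is a placeholder):

# Sketch: fetch a short link without following the redirect so the 301 status
# and its caching headers are visible. Placeholder short URL.
import requests

resp = requests.get("https://bit.ly/example", allow_redirects=False)
print(resp.status_code)                   # e.g. 301
print(resp.headers.get("Location"))       # the resolved long URL
print(resp.headers.get("Cache-Control"))  # e.g. private, max-age=90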

Identifying iframe-unfriendly sites in Rails, even when X-Frame-Options is missing from the header

Background:
I'm working on a rails app that will open up articles inside the app itself via an iframe (with a navbar at the top from my app, kind of like StumbleUpon). But I've noticed some websites that publish articles (examples: pitchfork.com, vox.com, theverge.com) prevent themselves from being loaded in an iframe by setting X-Frame-Options to SAMEORIGIN or DENY.
My current plan to work around this is to look at the headers for the link and check whether they contain X-Frame-Options. If so, I will forgo the iframe and just open the original site in a new tab.
This method seems to work for some websites (like pitchfork.com) because when I request the header from pitchfork.com, I get the following:
server: nginx/1.4.6 (Ubuntu)
content-type: text/html; charset=utf-8
x-frame-options: SAMEORIGIN
date: Wed, 27 Jan 2016 17:47:54 GMT
x-varnish: 912263733 912263044
age: 8
via: 1.1 varnish
connection: keep-alive
Problem:
For some websites (like vox.com), when I load them in an iframe, the chrome developer console tells me that x-frame-options is preventing the site from loading in an iframe. But when I examine the header, x-frame-options is nowhere to be found! All I get is this:
server: nginx/1.6.2
date: Wed, 27 Jan 2016 17:26:15 GMT
content-type: text/html
content-length: 172
connection: close
How is vox.com doing this? For further clarification, I tried using a tool that I found in another Stack Overflow post, and it also failed to detect that vox.com was blocking iframes via X-Frame-Options.
1) Is Vox able to set X-Frame-Options somewhere other than the header? If so, how can I detect and find that?
2) Any other alternate strategies you recommend for detecting iframe-unfriendly sites so that I can have them set to open in a new tab instead?
Take a look at the network traffic recorded in the Chrome console. In your app, you're looking at the headers of the HTTP 301 Moved Permanently response, which then redirects you to the location that does return the X-Frame-Options: SAMEORIGIN header.
Other methods, such as the newer Content-Security-Policy header or JavaScript code, may be used by other websites to prevent iframe embedding. But in the case of vox.com, you're simply looking at the headers of the wrong response.
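A rough sketch of that detection idea, shown here in Python with the third-party requests package rather than Rails: follow redirects first, then inspect the headers of the final response. The Content-Security-Policy check is deliberately coarse; a real implementation would parse the frame-ancestors directive.

# Sketch: follow redirects, then check the final response for headers that
# block framing. Coarse heuristic, not a complete implementation.
import requests

def frame_blocked(url):
    resp = requests.get(url, allow_redirects=True, timeout=10)
    xfo = resp.headers.get("X-Frame-Options", "").upper()
    csp = resp.headers.get("Content-Security-Policy", "")
    return "DENY" in xfo or "SAMEORIGIN" in xfo or "frame-ancestors" in csp

print(frame_blocked("https://www.vox.com/"))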

HTTP Headers: Controlling Cache and History Mechanism

I'm trying to figure out the best HTTP headers to send for four use cases. I'm hoping to come up with headers that do not depend on user agent / protocol version sniffing, but I'll accept that if nothing else fits. All URLs are fetched through a fully custom handler, so I can set the headers however I like; this is all about intermediate proxies and user agents. If possible, this should be compatible with both HTTP/1.0 and HTTP/1.1 clients. If multiple solutions exist, the best one is the shortest when sent over the wire.
Static public content
All "Static public content" is stuff that HTTP is really all about: if the URL is the same, the content is the same. I can do this easily: for example, I put user profile icon into http://domain.com/profiles/xyz/icon/1234abcd where "1234abcd" is the SHA-1 of the file contents of the icon. If I change to icon in the future, I'll create a new URL and and modify all existing referrers that should use the new icon. What are the best headers to declare that this may be cached forever and may be shared? I'm currently thinking something along the lines:
Date: <current time>
Expires: <current time + one year>
Is this enough to allow caching by user agents and proxies? Do I need Last-Modified or Pragma?
Static non-public content
All "Static non-public content" is stuff that is static but may not be available to everybody. In fact, this content will be available only to selected logged in users (session is kept with session cookie holding session UUID). If the URL is the same, the content is the same. However, the response is not public. An use case could be an image shared to selected friends in a social network service. I'm currently thinking something along the lines:
Date: <current time>
Expires: <current time>
Cache-Control: private, max-age=<huge number>, s-maxage=0
Is this enough to allow caching by user agents and to disable caching by proxies? Do I need Pragma?
Volatile public content
All "Volatile public content" is stuff that is volatile and available to everybody. Something like frontpage of http://slashdot.org/ when not logged in. The intent is to allow rapidly updating content in a non-changing URL. Note that I do NOT want to break the user agent history mechanism (that is, clicking something from a volatile page and then hitting the back button should not result in fetching the volatile page from the server -- however, clicking a link that goes to front page should fetch the resource from the server). I'm currently thinking something along the lines:
Date: <current time>
Expires: <current time>
Cache-Control: public, max-age=0, s-maxage=0
Is this enough to prevent caching but to allow history mechanism (back button)? I know that if I send Cache-Control: no-store, must-revalidate I can force reloading but this is not what I want because that will break the back button, too. Do I need Last-Modified or Pragma?
Even though this is public, it probably does not make sense to allow intermediate proxies to cache this because it's volatile.
Volatile non-public content
All "Volatile non-public content" is stuff that is volatile and not available to everybody (private). Something like frontpage of http://slashdot.org/ when you are logged in. The intent is to allow rapidly updating content in a non-changing URL. Note that I do NOT want to break the user agent history mechanism (that is, clicking something from a volatile page and then hitting the back button should not result in fetching the volatile page from the server -- however, clicking a link that goes to front page should fetch the resource from the server). I'm currently thinking something along the lines:
Date: <current time>
Expires: <current time>
Cache-Control: private, max-age=0, s-maxage=0
Is this enough to prevent caching but to allow history mechanism (back button)? Do I need Pragma?
Things that still need testing with my suggested headers:
Verify that private content will not be leaked through HTTP/1.0 proxies.
Verify that caching works correctly in proxies.
Verify that caching works correctly in user agents.
Verify that user agent history mechanism works in user agents (all cases).
Verify that following a link to a volatile page fetches fresh content from the server.
Verify all the results when using HTTPS instead of HTTP.
I'll answer my own question:
Static public content
Date: <current time>
Expires: <current time + one year>
Rationale: This is compatible with the HTTP/1.0 proxies and RFC 2616 Section 14: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.21
The Last-Modified header is not needed for correct caching (conforming user agents follow the Expires header), but it may be included for end user consumption. Including the Last-Modified header may also decrease server data transfer in case the user hits the Reload/Refresh button. If a Last-Modified header is added, it should reflect real data instead of something made up. If you want to decrease server data transfer (in case the user hits Reload/Refresh) and cannot include a real Last-Modified header, you may add an ETag header to allow conditional GET (http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.26). If you already include Last-Modified, also adding ETag is just waste. Note that Last-Modified is clearly superior because it is supported by HTTP/1.0 clients and proxies, too. A suitable value for ETag in the case of dynamic pages is the SHA-1 of the contents of the page/resource. Note that using Last-Modified or ETag will not help with server load, only with the server's outgoing internet pipe / data transfer rate.
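A sketch of the conditional GET described above, with the ETag computed as the SHA-1 of the response body. This is framework-agnostic pseudo-handler code, not tied to any particular server:

# Sketch: ETag is the SHA-1 of the body; a matching If-None-Match value lets
# the server answer 304 without resending the body.
import hashlib

def respond(body, if_none_match=None):
    etag = '"%s"' % hashlib.sha1(body).hexdigest()
    if if_none_match == etag:
        return 304, {"ETag": etag}, b""   # client's cached copy is still valid
    return 200, {"ETag": etag}, body

status, headers, payload = respond(b"<html>...</html>")
print(status, headers["ETag"])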
Static non-public content
Date: <current time>
Expires: <current time>
Cache-Control: private, max-age=31536000, s-maxage=0
Vary: Cookie
Rationale: The Date and Expires headers are for HTTP/1.0 compatibility; because there is no sensible HTTP/1.0 way to specify that the response is private, these headers communicate that the response may not be cached. The Cache-Control header says that this response may be cached by a private cache but not by a shared cache. The s-maxage=0 is added because private may not be supported by all proxies that support Cache-Control (http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.3 - I have no idea which proxies are broken). The max-age is set to 60*60*24*365 (1 year) because the HTTP/1.1 specification does not define any upper limit for this parameter; I guess this is implementation dependent. The Expires header SHOULD be limited to one year in the future, so using the same logic here should be okay. The Vary: Cookie header is required because the session that is used to check whether the visitor is allowed to see the content is transferred in a cookie; because the returned response depends on the cookie value, the cache may not use a cached response if the Cookie header changes.
I might personally break the last part. By not including the Vary: Cookie header I can improve caching a lot. For example: I have a profile image at http://example.com/icon/12 which is returned only for selected authenticated users. I have a visitor X with session id 5f2 and I allow the image to that user. Visitor X logs out and then later logs in again. Now X has session id 2e8 stored in his session cookie. If I have Vary: Cookie, the user agent of X cannot use the cached image and is forced to reload it into its cache. Because the content varies by Cookie, a conditional GET with the last modification time cannot be used. I haven't tested whether using ETag could help in this case, because then the server response would be the same (it would match the SHA-1 ETag computed from the contents of the response). Be warned that Internet Explorer (at least up to version 9) always forces a conditional GET for resources that include Vary: Cookie, even if a suitable response is already in the cache (source: http://blogs.msdn.com/b/ie/archive/2010/07/14/caching-improvements-in-internet-explorer-9.aspx). This is because the internal cache implementation of MSIE does not remember which Cookie it sent the first time, so it cannot know whether the current Cookie is the same one.
However, here's an example of a problem that is caused by dropping the Vary: Cookie header to show why this is indeed required for technically correct behavior: see the example above and imagine that after X has logged out, visitor Y logs in with the same user agent (the user agent may have been restarted between X and Y, it does not matter). If Y views a page that includes a link to http://example.com/icon/12 then Y will see the icon embedded inside the page even though Y wouldn't be able to see the icon if X had not been using the same user agent previously. In my case I don't consider this a big enough problem because Y would be able to access the icon manually by inspecting the user agent cache regardless of possibly added Vary: Cookie. However, this issue may prevent Y from noticing that he wouldn't technically have access to this content (this may be important e.g. if Y is co-authoring the content). If the content is considered sensitive, the server must send no-store regardless of the problems caused by this Cache-Control directive.
Here too, adding a Last-Modified header will help with users hitting the Reload/Refresh button (see the discussion above).
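To make the Vary: Cookie behavior concrete, here is a toy sketch (not any real cache implementation) of the reuse check a cache performs; it uses the session ids from the example above:

# Sketch: how a cache honours Vary: Cookie. The cached entry remembers the
# Cookie value from the request that stored it; a later request may reuse the
# entry only if its Cookie matches.
cached = {
    "vary": "Cookie",
    "request_headers": {"Cookie": "session=5f2"},
    "body": b"...icon bytes...",
}

def can_reuse(entry, request_headers):
    for name in entry["vary"].split(","):
        name = name.strip()
        if request_headers.get(name) != entry["request_headers"].get(name):
            return False
    return True

print(can_reuse(cached, {"Cookie": "session=5f2"}))  # True: same session as before
print(can_reuse(cached, {"Cookie": "session=2e8"}))  # False: X logged in again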
Volatile public content
Date: <current time>
Expires: <current time>
Cache-Control: public, max-age=0, s-maxage=0
Last-Modified: <real-last-modification-time>
Rationale: Tell HTTP/1.0 clients and proxies that this response should be considered stale immediately. The Last-Modified time is included to allow skipping the content data transmission when the resource is accessed again and the client supports conditional GET. If Last-Modified cannot be used, ETag may be used as a replacement (see the discussion above). It is critical to use Last-Modified to allow conditional GET with HTTP/1.0 compatible clients.
If the content may be delayed even slightly, then Expires, max-age and s-maxage should be adjusted suitably. For example, adding 5 seconds to those might help a lot for a highly popular site, as suggested by symcbean's answer. Note that unlike conditional GET, increasing the expiry time will decrease server load instead of just decreasing the server's outgoing data traffic (because the server will see fewer requests in total).
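A sketch of that small-TTL variant, computing Expires a few seconds in the future with a matching max-age and s-maxage, using only the Python standard library:

# Sketch: a 5-second TTL for volatile public content, so a busy page is served
# from caches for a few seconds instead of hitting the origin on every request.
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

TTL = 5  # seconds
now = datetime.now(timezone.utc)
headers = {
    "Date": format_datetime(now, usegmt=True),
    "Expires": format_datetime(now + timedelta(seconds=TTL), usegmt=True),
    "Cache-Control": "public, max-age=%d, s-maxage=%d" % (TTL, TTL),
}
print(headers)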
Volatile non-public content
Date: <current time>
Expires: <current time>
Cache-Control: private, max-age=0, s-maxage=0
Last-Modified: <real-last-modification-time>
Vary: Cookie
Rationale: Tell HTTP/1.0 clients and proxies that this response should be considered stale immediately. The Last-Modified time is included to allow skipping the content data transmission when the resource is accessed again and the client supports conditional GET. If Last-Modified cannot be used, ETag may be used as a replacement (see the discussion above). It is critical to use Last-Modified to allow conditional GET with HTTP/1.0 compatible clients. Also note that Cache-Control must not include no-cache, must-revalidate or no-store, because using any of these directives will break the back button in at least one user agent. However, if the content the server is transferring contains sensitive material that should not be stored in permanent storage, the no-store flag MUST be used regardless of breaking the back button. Warning: note that the use of no-store cannot prevent sensitive material from ending up on the hard disk without encryption if the operating system has swapping enabled and the swap is not encrypted! Also note that using no-store makes very little sense unless the connection is encrypted (HTTPS/SSL).
Mostly OK; however, you do need to bear in mind that HTTP/1.0 proxies may cache content served up as
Cache-Control: private
So you should set an explicit Last-Modified header as well as the Expires header.
For your 'Static non-public content' you should add a 'Vary: Cookie' header.
For your 'Volatile public content': how fast is it changing? Setting a TTL of +5 seconds may offload a lot of effort from your servers.
For 'Volatile non-public content' you should probably add no-cache, must-revalidate to the Cache-Control header.
Pragma headers issued from the server should have no effect on clients or proxies.
Do test what happens when your cache expires (in my experience you can end up with a system even slower than one accessed with no populated cache, due to all the conditional requests / 304 responses).
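Since conditional requests come up several times above, here is a tiny sketch of the Last-Modified / If-Modified-Since side of it (again framework-agnostic; the timestamp is a placeholder):

# Sketch: answer 304 when the client's If-Modified-Since is not older than the
# resource's real last modification time.
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

LAST_MODIFIED = datetime(2016, 1, 27, 17, 0, 0, tzinfo=timezone.utc)  # placeholder

def respond(body, if_modified_since=None):
    headers = {"Last-Modified": format_datetime(LAST_MODIFIED, usegmt=True)}
    if if_modified_since:
        try:
            if parsedate_to_datetime(if_modified_since) >= LAST_MODIFIED:
                return 304, headers, b""  # client's copy is still current
        except (TypeError, ValueError):
            pass  # malformed date: fall through to a full response
    return 200, headers, body

print(respond(b"page", "Wed, 27 Jan 2016 17:00:00 GMT")[0])  # 304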

How to cache an HTTP POST response?

I would like to create a cacheable HTTP response for a POST request.
My current implementation responds with the following for the POST request:
HTTP/1.1 201 Created
Expires: Sat, 03 Oct 2020 15:33:00 GMT
Cache-Control: private,max-age=315360000,no-transform
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Content-Length: 9
ETag: 2120507660800737950
Last-Modified: Wed, 06 Oct 2010 15:33:00 GMT
.........
But it looks like the browsers (Safari and Firefox tested) are not caching the response.
In the HTTP RFC the corresponding part says:
Responses to this method are not cacheable unless the response includes appropriate Cache-Control or Expires header fields. However, the 303 (See Other) response can be used to direct the user agent to retrieve a cacheable resource.
So I think it should be cached. I know I could set a session variable and set a cookie and do a 303 redirect, but I want to cache the response of the POST request.
Is there any way to do this?
P.S.: I started with a simple 200 OK, and that does not work either.
I'd also note that caching is always optional (it's a MAY in the HTTP/1.1 RFC). Since, under most circumstances, a successful POST invalidates a cache entry, it's probably simply the case that the browser caches you're looking at don't implement caching of POST responses (since this would be pretty uncommon; usually this is accomplished by formatting things as a GET, which it sounds like you've done).
Short answer: POST caching rarely makes sense. A cache may serve GET requests to a URL which is the same as that of a previous POST, whose response came with a Content-Location header containing the POST's request URI.
From RFC 7231 (httpbis, superseding RFC 2616):
Responses to POST requests are only cacheable when they include
explicit freshness information (see Section 4.2.1 of [RFC7234]).
However, POST caching is not widely implemented. For cases where an
origin server wishes the client to be able to cache the result of a
POST in a way that can be reused by a later GET, the origin server
MAY send a 200 (OK) response containing the result and a
Content-Location header field that has the same value as the POST's
effective request URI (Section 3.1.4.2).
See also Mark Nottingham's blog:
POSTs don't deal in representations of identified state, 99 times out
of 100. However, there is one case where it does; when the server goes
out of its way to say that this POST response is a representation of
its URI, by setting a Content-Location header that's the same as the
request URI. When that happens, the POST response is just like a GET
response to the same URI; it can be cached and reused -- but only for
future GET requests.
The RFC also describes a POST-redirect-GET (PRG) sequence, which has a similar effect, allowing the response cycle of a POST to fill the cache for a subsequent GET; this is probably more widely implemented.
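A sketch of the mechanism quoted from RFC 7231: the POST response carries explicit freshness information plus a Content-Location equal to the POST's effective request URI, so a cache may reuse it for later GETs to that URI. This is framework-agnostic pseudo-handler code; the path and body are placeholders:

# Sketch: make a POST response reusable for later GETs to the same URI by
# sending freshness info and Content-Location set to the effective request URI.
def handle_post(request_uri, result_body):
    headers = {
        "Content-Location": request_uri,          # same as the request URI
        "Cache-Control": "public, max-age=3600",  # explicit freshness
        "Content-Type": "application/json",
    }
    return 200, headers, result_body

status, headers, body = handle_post("/things/42", b'{"id": 42}')
print(status, headers)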
Can you try changing the Cache-Control to public instead of private and see if it works?
