Site running Google Cloud CDN: according to numerous tests it is not caching webp images, and potentially not images in general.
This was corroborated with GTmetrix.
Initially I used the Cloud CDN cache mode "Cache static content", and later switched to "Use origin settings based on cache-control headers".
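If the same change is made with gcloud rather than the console, it corresponds roughly to the following (the backend service name is a placeholder; drop --global for a regional backend service):

gcloud compute backend-services update my-backend-service \
    --global \
    --cache-mode=USE_ORIGIN_HEADERS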
Current Cache Settings:
I am still seeing images, particularly webp, not being cached by the CDN.
I have also updated the .htaccess file to increase the TTL for webp images (see the .htaccess image TTL screenshot).
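The rule is along these lines (a sketch, assuming mod_expires is enabled; the TTL value here is a placeholder, the real one is whatever the screenshot shows):

<IfModule mod_expires.c>
    ExpiresActive On
    # placeholder TTL for webp images
    ExpiresByType image/webp "access plus 1 month"
</IfModule>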
How can I get these images cached properly by Cloud CDN?
Just adding to elving's answer: the report is wrong, and your site is using the CDN for images too.
I've followed the official documentation on troubleshooting GCP's Cloud CDN.
Just run curl -s -D - -o /dev/null https://mydoginsurance.com.au/images/banner.webp and you should get:
HTTP/2 200
server: nginx
date: Thu, 25 Feb 2021 12:26:01 GMT
content-type: image/webp
content-length: 130564
last-modified: Fri, 16 Oct 2020 10:12:50 GMT
etag: "5f897222-1fe04"
x-powered-by: PleskLin
accept-ranges: bytes
via: 1.1 google
cache-control: max-age=86400,public
alt-svc: clear
and when I ran it again a few minutes later:
HTTP/2 200
server: nginx
date: Thu, 25 Feb 2021 12:26:01 GMT
content-type: image/webp
content-length: 130564
last-modified: Fri, 16 Oct 2020 10:12:50 GMT
etag: "5f897222-1fe04"
x-powered-by: PleskLin
accept-ranges: bytes
via: 1.1 google
age: 223
cache-control: max-age=86400,public
alt-svc: clear
The third-to-last line is age: 223, which means this response was served from a cache entry created 223 seconds ago.
As the troubleshooting documentation explains, Cloud CDN adds an Age header to responses that it serves from cache; the Age value indicates how many seconds ago the cache entry that served the response was created.
That GTmetrix report is simply wrong. They apparently don't correctly detect use of Cloud CDN. I see cache hits from Cloud CDN for images such as /images/banner.webp.
There's information on troubleshooting cache misses at https://cloud.google.com/cdn/docs/troubleshooting-steps#responses-not-cached that you can use to double-check.
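The usual culprits there are a Set-Cookie header, a restrictive Cache-Control (private, no-store, no-cache), or a Vary header Cloud CDN won't accept. A quick way to check for those on the same URL:

curl -s -D - -o /dev/null https://mydoginsurance.com.au/images/banner.webp | grep -iE 'cache-control|set-cookie|vary|age'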
Related
I have issues convincing Firefox 71 to cache a large (>4 MB) image. I can see, both in developer tools (the request is logged) and during normal use (there is a noticeable loading delay), that the image is downloaded every time the page is accessed.
Although I thought I provided all the necessary response headers, Firefox is not sending If-Modified-Since or If-None-Match request headers.
These are the HTTP headers my server is sending:
$ HEAD https://😉/image.png
200 OK
Cache-Control: public, max-age=31536000, immutable
Connection: close
Date: Sat, 04 Jan 2020 19:52:20 GMT
Accept-Ranges: bytes
ETag: "564cd5fb-4484b0"
Server: nginx/1.14.0 (Ubuntu)
Content-Length: 4490416
Content-Type: image/png
Last-Modified: Wed, 18 Nov 2015 19:48:11 GMT
Client-Date: Sat, 04 Jan 2020 19:52:20 GMT
Client-Peer: 😛
Client-Response-Num: 1
Client-SSL-Cert-Issuer: /C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
Client-SSL-Cert-Subject: /CN=😉
Client-SSL-Cipher: ECDHE-RSA-CHACHA20-POLY1305
Client-SSL-Socket-Class: IO::Socket::SSL
The web page loads the image via JavaScript:
let mapImg = new Image();
mapImg.src = 'image.png';
I believe I did everything according to the documentation, and I wonder whether some combination of response headers, encryption, compression, or loading method is wrong.
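For reference, this is what a conditional revalidation request would look like if Firefox sent one, simulated with curl (the ETag is copied from the response above; the host is redacted with 😉 as in the headers above):

curl -s -D - -o /dev/null -H 'If-None-Match: "564cd5fb-4484b0"' https://😉/image.png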
Context
My GitHub Pages site is not refreshing. After diagnosing, my conclusion is that it's a server-side caching effect.
What I did + diagnostic results
The site is working OK.
I made a change in index.html in my local repo, then committed and pushed.
I completely cleared my browser cache (I also use cache-clearing plugins, and Chrome dev tools are set to bypass the cache).
Reloaded the page with Ctrl+F5 and Ctrl+R (the change is not applied).
Checked index.html on github.com: the change is there, committed.
Monitored the traffic with Fiddler: the request for index.html was sent and a full response was received, but the content is the old, unchanged version.
Examined the response headers with Fiddler (see the header exhibit below).
Reverse diagnostic
I've issued a request with the usual trick, typing index.html?v001orAnythingYouWant, and I got the new version of the page.
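To illustrate (the site URL here is a placeholder), the cached and cache-busted requests can be compared with curl:

curl -s -D - -o /dev/null https://username.github.io/index.html
curl -s -D - -o /dev/null 'https://username.github.io/index.html?v001'

The first response may come from the Fastly cache (Age greater than 0, X-Cache: HIT, as in the exhibit below); the unique query string makes the second request miss the cached entry.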
Problem
One could say the problem is solved, but it is not: when I update images, CSS, or JS, this effect will still prevent me from seeing the new result.
Question
How can I configure or work around this server-side caching, of course only during development/testing?
Response header exhibit
HTTP/1.1 200 OK
Server: GitHub.com
Content-Type: text/html; charset=utf-8
Last-Modified: Fri, 06 May 2016 12:24:29 GMT
Access-Control-Allow-Origin: *
Expires: Fri, 06 May 2016 12:45:44 GMT
Cache-Control: max-age=600
X-GitHub-Request-Id: B91F111E:5AA6:47804:572C8F9F
Content-Length: 43752
Accept-Ranges: bytes
Date: Fri, 06 May 2016 12:35:57 GMT
Via: 1.1 varnish
Age: 13
Connection: keep-alive
X-Served-By: cache-fra1238-FRA
X-Cache: HIT
X-Cache-Hits: 1
Vary: Accept-Encoding
X-Fastly-Request-ID: 1758f53052edbfb40a0044407d53d5654ad1e983
I'm having a strange issue: when I reload a page repeatedly, sometimes it loads fine, sometimes it loads with missing images/CSS, and sometimes it redirects to my site's 404 page. Below are two sequential curl commands which might be helpful.
For background, I've already cleared out the previous page slugs in the DB and restored the .htaccess file. Any pointers are greatly appreciated!
[my machine]:$ curl -s -D - [my url] -o /dev/null
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Date: Tue, 04 Mar 2014 21:44:50 GMT
Server: Apache
X-Powered-By: PHP/5.3.14
Content-Length: 6131
Connection: keep-alive
[my machine]:$ curl -s -D - [my url] -o /dev/null
HTTP/1.1 404 Not Found
Cache-Control: no-cache, must-revalidate, max-age=0
Content-Type: text/html; charset=UTF-8
Date: Tue, 04 Mar 2014 21:44:51 GMT
Expires: Wed, 11 Jan 1984 05:00:00 GMT
Last-Modified: Tue, 04 Mar 2014 21:44:51 GMT
Pragma: no-cache
Server: Apache
X-Pingback: http://promotions.glamour.com/xmlrpc.php
X-Powered-By: PHP/5.3.14
transfer-encoding: chunked
Connection: keep-alive
WordPress randomly returning 404
I had a similar problem and I discovered the fix. Leaving this for whoever may come to this page, as I did, in search of a solution.
It turned out that some plug-ins were causing a spike in memory consumption; the shared host (DreamHost) was killing requests that took too much memory and returning a 404 error. In my log files I was seeing "Premature end of script headers".
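Those log entries can be found with something like this (the error log path varies by host, so treat it as a placeholder):

grep -i 'premature end of script headers' /var/log/apache2/error.log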
I disabled all non-critical plug-ins, and not only have the random 404s stopped, but the site is loading much faster overall.
So it turns out the solution had nothing to do with WordPress.
Long story short - a second duplicate server had been quietly set up to handle large traffic spikes, and our build scripts knew nothing about that new server and weren't deploying to it.
Ahh the joys of troubleshooting at a large company...
I have a quick question. I've already read RFC 2616, section 14.23, about the Host header, but I still don't understand what should be changed in httpd.conf or the web server's configuration file. Please correct me if I'm wrong.
Look at the following two HTTP GET requests I made to an Apache server. The first one is an HTTP/1.0 GET, the other an HTTP/1.1 GET. See the output:
HTTP/1.0 200 OK
Date: Thu, 24 Oct 2013 03:46:22 GMT
Server: Apache/1.3.41 (Unix) mod_gzip/1.3.26.1a PHP/5.2.9 mod_throttle/3.1.2 mod_psoft_traffic/0.2 mod_ssl/2.8.31 OpenSSL/0.9.8b
Vary: *
Last-Modified: Fri, 10 Aug 2012 20:22:30 GMT
ETag: "17c815b-3b-50256d86"
Accept-Ranges: bytes
Content-Length: 59
Connection: close
Content-Type: text/html
<html>
<body>
<center>webli7</center>
</body>
</html>
HTTP/1.1 400 Bad Request
Date: Thu, 24 Oct 2013 04:04:40 GMT
Server: Apache/1.3.41 (Unix) mod_gzip/1.3.26.1a PHP/5.2.9 mod_throttle/3.1.2 mod_psoft_traffic/0.2 mod_ssl/2.8.31 OpenSSL/0.9.8b
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=iso-8859-1
16e
The HTTP protocol version is decided dynamically, not through configuration files. The client sends a request specifying the highest protocol version that it supports. The server must then respond with either the version requested by the client, or any earlier version that it prefers.
Since Apache does support HTTP/1.1, it will normally match the version provided by the client.
There is a flag that you may set in Apache's config to force Apache to use HTTP/1.0 in certain situations, even though the browser requested HTTP/1.1. It is used to work around bugs in the HTTP/1.1 handling of some very old browsers. Today, you should not need to play with this flag.
As for your error, I would suggest that you make sure your GET request provides the Host: header. This header is required in HTTP/1.1 yet optional in HTTP/1.0, and omitting it would certainly result in a 400 error.
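A minimal way to reproduce both requests from a shell, using netcat (the hostname is a placeholder):

printf 'GET / HTTP/1.0\r\n\r\n' | nc www.example.com 80
printf 'GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n' | nc www.example.com 80

The first request is valid without a Host header; the second must include one, otherwise Apache answers 400 Bad Request.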
I have a script on GAE that requests an XML feed from a partner that's typically 40MB but only 5MB gzipped. GAE is automatically unzipping this content and throwing an error that the response is too big:
HTTP response was too large: 46677241. The limit is: 33554432.
The script is set up to uncompress the response itself. How do I prevent GAE from getting in the way and breaking?
Here's the response header from my partner:
HTTP/1.0 200 OK
Expires: Wed, 27 Jun 2012 05:42:07 GMT
Cache-Control: max-age=10368000
Content-Type: application/x-gzip
Accept-Ranges: bytes
Last-Modified: Wed, 22 Feb 2012 11:06:09 GMT
Content-Length: 5263323
Date: Tue, 28 Feb 2012 05:42:07 GMT
Server: lighttpd
X-Cache: MISS from static01
X-Cache-Lookup: MISS from static01:80
Via: 1.0 static01:80 (squid)
Most likely your partner's server responds with plain XML because it thinks the HTTP client sending the requests (i.e. the GAE URL Fetch service) does not support gzipping. Hence the "response was too large" error.
To announce that you actually want to receive gzipped content, you need to set the Accept-Encoding: gzip header when using the URL Fetch service.
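From the command line you can check how the partner reacts to that header (the feed URL is a placeholder):

curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' http://partner.example.com/feed.xml

If the partner then answers with the roughly 5 MB gzipped body (as in the headers above), sending the same Accept-Encoding: gzip header on the URL Fetch call should keep the response under the size limit.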