TTFB from very high to very low with Cloudflare - WordPress

We are using a WordPress setup hosted on Google Cloud, with Cloudflare in front.
In Cloudflare we are using the page cache feature, which should decrease the TTFB substantially. What it basically does is cache every static page and serve it to the client directly. What makes me wonder is that if I make a request in the morning, the TTFB is over 1 second. For all requests after that, the TTFB drops to around 70ms. That is a huge difference. It almost feels like a browser cache when I visit a website for the second time. But after some time the TTFB spikes back to over 1 second, almost as if Cloudflare drops the cache. That's why we additionally set an Edge Cache TTL of 1 month, but still: I see these daily spikes, and I suspect every user gets a TTFB over 1 second when visiting our site for the first time.
Any guesses why this is so random?
This is the guide directly from Cloudflare about the page cache:
https://support.cloudflare.com/hc/en-us/articles/236166048-Caching-Static-HTML-with-WordPress-WooCommerce
Appreciate your help

I believe Cloudflare doesn't cache universally, meaning that one retrieval of a cached static resource does not place a copy on all Cloudflare servers. As far as I know, Cloudflare caches per data center, which is what the "ray" in its implementation refers to. So the 1 second+ TTFB is most likely Cloudflare retrieving the page from your origin server and caching the result, because the data center serving your request hasn't cached it yet.
To confirm this, look at the response headers. There will be a CF-Cache-Status header that indicates HIT or MISS, and you will probably see that it is always MISS for the 1 second+ requests. You should also see another header called CF-Ray that looks something like 5abb86fb2d6c9bc1-SJC, where SJC is the data center code. Verify that this data center is geographically close to you, to make sure your DNS is set up correctly to reach a nearby Cloudflare server, per the site list here: https://www.cloudflarestatus.com/
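For a quick check, here is a minimal sketch you can paste into the browser console while on your site (same-origin, so the headers are readable; example.com stands in for your own URL):
fetch('https://example.com/', { method: 'HEAD' }).then(function (res) {
  // HIT means Cloudflare served it from cache; MISS means it went to origin
  console.log('CF-Cache-Status:', res.headers.get('cf-cache-status'));
  // the trailing code identifies the data center, e.g. ...-SJC
  console.log('CF-Ray:', res.headers.get('cf-ray'));
});
Run it once after a long idle period and again right after: the slow request should log MISS and the fast one HIT.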

Related

Cloudflare optimization techniques (free plan)

OK, so I'm trying to benefit from CF's free plan and squeeze as much as I can out of it. The main goal is to get the site served from the CF cache so it loads faster in the browser, at least for first visits and for search engines. It is a WordPress site, so it can be a little slower than other sites.
So, to have CF cache properly, I have set the following page rules. You probably know that under the free plan three is the maximum:
https://example.com/wp-content/*
Browser Cache TTL: a year, Cache Level: Cache Everything, Edge Cache TTL: a month
https://example.com/wp-admin/*
Security Level: High, Cache Level: Bypass, Disable Apps, Disable Performance
https://example.com/*
Auto Minify: HTML, CSS & JS, Browser Cache TTL: 30 minutes, Cache Level: No Query String, Edge Cache TTL: 2 hours, Email Obfuscation: On, Automatic HTTPS Rewrites: On
Exactly in this order. These should let CF cache the files stored in wp-content (uploads etc.) for the maximum amount of time, then ignore and bypass wp-admin, and finally serve everything else (products in my case, blog articles, pages and so on) from its cache, although with a shorter lifetime. I've also set the caching level in the Cloudflare dashboard to 'No query string'.
So far CF caches all the above and first time visitors or search engines should get a super fast page.
Next, I've added the following in the site's footer:
<script>
jQuery(document).ready(function () {
  var e = "?" + (new Date).getTime(); // cache-busting suffix, e.g. "?1650000000000"
  jQuery("a").each(function () {      // append it to every link's href on the page
    jQuery(this).attr("href", jQuery(this).attr("href") + e);
  });
});
</script>
This script appends the current timestamp to all links on the page. By doing this I want the visitor to get the latest version of the page (i.e. from my server), not the one stored by CF, because CF should not cache URLs such as https://example.com/samplepage?234523445345, as it was instructed previously in both the cache settings and the page rules.
Now, what I'm worried about is CF caching pages belonging to logged-in members, such as account details. While the query-string JavaScript does work, and a member would click a link such as /account?23456456 so the page should not get cached, I have to wonder 'what if?'.
So, is there any better way to achieve what I am trying to (fast loading without caching members pages and sensitive details, such as shopping cart)? Or is this the maximum I can get out of the free plan?
If your site is entirely WordPress, it is really simpler to optimise than other platforms. Cloudflare has a newer service called Automatic Platform Optimization (APO): enable it in your Cloudflare dashboard, install Cloudflare's WordPress plugin, and connect Cloudflare to WordPress through APO. Then try to cache everything from your origin server. This will reduce both TTFB and RTT, which should definitely help your site's performance and speed.

Why is my website experiencing random slow API requests?

I have a VB.NET/Vue website hosted on an internal IIS 8.5 Windows 2012R2 server. Our company has about 30 users on the site at any given time. The users are experiencing random delays throughout the day, and on some days there are no delays (the site works great most of the time). What I'm looking for is any suggestion on where to start looking to solve the issue. Here's what I've found so far.
User goes to site and initiates an api request from the UI
User sees a loading icon for anywhere up to a minute or so while the request returns
The request eventually reaches the server after some time, executes really fast (within milliseconds), and returns the response to the user
By this time, many users have already refreshed the page, making new requests that succeed on page load. For the users who are patient, the response does eventually arrive.
So to sum everything up, there are several users experiencing delays on a daily basis.
Some days we don't have any delays, but most days several users experience multiple delays, ranging from a few seconds up to 30 seconds or even a minute.
I've found all this using LogRocket and New Relic, and what is happening is that these requests execute within milliseconds once they arrive, but the request doesn't seem to reach the server for some period of time.
I've been monitoring the CPU/memory/network on these servers, and it all looks fine to me while these issues occur.
It seems that the problem lies between the user's computer and whatever hardware/software the request passes through before reaching the web server.
Update here... I found that the problem occurs on the user's computer in all these instances. Using Google Chrome's Performance API, I was able to track timing info for these requests and found that the problem lies at fetchStart. So whatever is happening there is the cause of the issue.
Example below:
entryType: resource
startTime: 1119531.820000033
duration: 56882.43999995757
initiatorType: xmlhttprequest
nextHopProtocol: http/1.1
workerStart: 0
redirectStart: 0
redirectEnd: 0
fetchStart: 1119531.820000033
domainLookupStart: 1176401.0199999902
domainLookupEnd: 1176402.2699999623
connectStart: 1176402.2699999623
connectEnd: 1176404.8350000521
secureConnectionStart: 1176403.6700000288
requestStart: 1176404.8549999716
responseStart: 1176413.5300000198
responseEnd: 1176414.2599999905
transferSize: 15145
encodedBodySize: 14884
decodedBodySize: 14884
serverTiming: []
workerTiming: []
fetchStart is at 1119531.820000033, while requestStart is at 1176404.8549999716, so the problem is somewhere between fetchStart and requestStart: a gap of roughly 57 seconds, which matches the 56882ms duration. Still looking into what is causing this.
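For reference, here is a minimal sketch of how these entries can be pulled up in the browser console with the Resource Timing API (the 1000ms threshold is an arbitrary choice for this example):
performance.getEntriesByType('resource')
  .filter(function (entry) { return entry.initiatorType === 'xmlhttprequest'; })
  .forEach(function (entry) {
    // requestStart is 0 for cross-origin resources without a Timing-Allow-Origin header
    var stall = entry.requestStart - entry.fetchStart;
    if (entry.requestStart > 0 && stall > 1000) {
      console.log('stalled request:', entry.name, Math.round(stall) + 'ms between fetchStart and requestStart');
    }
  });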
In 2022, we are experiencing something very similar with a small fraction of our customers. There is a significant gap between the Timing API's requestStart and startTime. This gap can be up to 8 minutes -- I admire the patience of customers waiting that long. The wait periods are also close to multiples of a minute.
In our case, there appears to be a (transparent?) proxy between those browsers and our server infrastructure which is triggering the problem. In particular, it forces a downgrade from HTTP/2 to HTTP/1.1. Whitelisting our website in that proxy does solve the problem. This isn't a very satisfactory solution, but it does make the customers happier!
[UPDATE]
In our case, it turned out that we were sending a Content-Length header with a non-zero value on 304 responses. This is technically invalid, and it caused problems with the proxy. It happened because Django's CommonMiddleware always puts a Content-Length header on responses. The solution was to add a new piece of middleware that strips the Content-Length header (and the content) from 304 responses.
It turned out that the content was already being stripped by our nginx frontend, but it is better not to generate it in the first place.
And what was the content? -- in our case, it was the 4 characters 'null'!
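The fix described above was Django middleware; as a rough sketch of the same idea in a different stack (Express here, purely illustrative), it amounts to dropping the header just before a 304 goes out:
var express = require('express');
var app = express();

// A 304 must not advertise a body length for content it does not include,
// so remove Content-Length whenever we are about to send a 304.
app.use(function (req, res, next) {
  var writeHead = res.writeHead;
  res.writeHead = function (statusCode) {
    if (statusCode === 304) {
      res.removeHeader('Content-Length');
    }
    return writeHead.apply(this, arguments);
  };
  next();
});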

How to display the cached version first and check the etag/modified-since later?

With caching headers I can either make the client skip checking for updates for a certain period of time, or have it check ETags on every request. What I do not know is whether I can do both: use the offline version first, but meanwhile, in the background, check for an update. If there is a new version, it would be used the next time the page is opened.
For a page that is completely static except when the user themselves changes it, this would be much more efficient than blocking on an ETag check every time.
One workaround I thought of is using JavaScript: set headers to cache the page indefinitely and have some JavaScript make a request with an If-Modified-Since header or something similar, which could then dynamically update the page. The big issue with this is that it cannot invalidate the existing cache, so it would have to keep dynamically updating the page theoretically forever. I'd also prefer to keep it pure HTTP (or HTML, if there is some tag that can do this), but I cannot find any relevant hits online.
A related question mentions "the two rules of caching": never cache HTML and cache everything else forever. Just to be clear, I mean to cache the HTML. The whole purpose of the thing I am building is for it to be very fast on very slow connections (high latency, low throughput, like EDGE). Every roundtrip saved is a second or two shaved off of loading time.
Update: reading more caching resources, it seems the Vary: Cookie header might do the trick in my case. I would like to know if there is a more general solution though, and I haven't really dug into the Vary header yet, so I don't know whether it works.
Solution 1 (HTTP)
There is a Cache-Control extension, stale-while-revalidate, which describes exactly what you want.
When present in an HTTP response, the stale-while-revalidate Cache-Control extension indicates that caches MAY serve the response in which it appears after it becomes stale, up to the indicated number of seconds.
If a cached response is served stale due to the presence of this extension, the cache SHOULD attempt to revalidate it while still serving stale responses (i.e., without blocking).
cache-control: max-age=60,stale-while-revalidate=86400
When the browser first requests the page, the result is cached for 60s. During that 60s period, requests are answered from the cache without contacting the origin server. For the next 86400s, content is served from the cache while simultaneously being re-fetched from the origin server. Only once both periods (60s + 86400s) have expired will the cache stop serving cached content and instead wait for fresh data from the origin server.
This solution has only one drawback: I was not able to find any browser or intermediate cache that currently supports this Cache-Control extension.
Solution 2 (Javascript)
Another solution is to use Service Workers, with their ability to construct custom responses to requests. Combined with the Cache API, this is enough to provide the requested feature.
The problem is that this solution only works in browsers (not in intermediate caches or other HTTP services), and not all browsers support Service Workers and the Cache API.
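Still, for browsers that do support them, here is a minimal service worker sketch of the stale-while-revalidate pattern (the cache name 'pages-v1' is arbitrary, and real code would want error handling and a check that the request is a GET):
// sw.js: answer from the cache immediately, refresh the cached copy in the background
self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.open('pages-v1').then(function (cache) {
      return cache.match(event.request).then(function (cached) {
        var refresh = fetch(event.request).then(function (response) {
          cache.put(event.request, response.clone()); // newer copy for the next visit
          return response;
        });
        // serve the stale copy if we have one, otherwise wait for the network
        return cached || refresh;
      });
    })
  );
});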

Cloudflare wait time over 20 seconds

I have been trying to find ways to speed up my site, so I have turned to Cloudflare to see if I can improve my load time.
My site is thelocalgolfer.com and I host it with HostMonster. I took three consecutive GTmetrix tests without Cloudflare enabled, then enabled Cloudflare and ran three consecutive GTmetrix tests with it enabled. You will see that with Cloudflare enabled it takes on average 21 seconds of wait time on the initial load. I have spent hours on the phone with HostMonster tech support trying to troubleshoot the problem, and they said they have exhausted all options on their side.
Also worth noting: when Cloudflare is enabled, one of the errors I have been getting is
Error : cURL error 6: Resolving host timed out: www.thelocalgolfer.com
in the middle of the page after it loads. The page still takes about 21 seconds.
Try it yourself, I still have it enabled (for now).
Here are the GTmetrix results with Cloudflare enabled:
http://gtmetrix.com/reports/www.thelocalgolfer.com/C3Yv7xNW
http://gtmetrix.com/reports/www.thelocalgolfer.com/Y35wcjzO
http://gtmetrix.com/reports/www.thelocalgolfer.com/x82tUdhH
Without Cloudflare enabled:
http://gtmetrix.com/reports/www.thelocalgolfer.com/NevlWuVV
http://gtmetrix.com/reports/www.thelocalgolfer.com/GDiEPUnG
http://gtmetrix.com/reports/www.thelocalgolfer.com/CcuvxYdq
By the way, I have gone back and forth with Cloudflare and they have been less than helpful; they tell me to tweak this option or that option, and they take 24-48 hours to respond.
I am hoping that someone has experience with this issue and can help me out!
Thanks,
Neil
I actually posted some information in your support ticket about this issue. Doing a curl against the www record, with no Cloudflare in the middle, returns a very large response time.
So, for anyone who runs into a similar problem: I was running SimplePie on my site, which was causing a loopback situation where the page was calling an RSS feed on the same domain.

Akamai: refresh cache before deployment and do cutover at a specified time

My objective is to achieve zero downtime during deployment. My site uses Akamai as its CDN. Let's say I have a primary and a secondary cluster of IIS servers. During deployment, the updates are made to the secondary cluster. Before switching over from primary to secondary, can I ask Akamai to cache the content and then do the cutover at a specified time?
The problem you are going to have is guaranteeing that your content is cached on ALL Akamai servers. Is the issue that you want to force content to be refreshed as soon as you cut over?
There are a few options here.
1 - Use a version in the requests, e.g. "?v=1". This version would ALWAYS be requested from origin and appended to every request. As soon as you update your site, update the version on origin so that the next request appends "?v=2", thus "busting" the cache and forcing an origin hit for all requests (see the sketch after this list).
2 - Change your Akamai config to honor web server TTLs. You can then set very low or almost 0 TTLs right before you cut over, and then increase them gradually after the cutover.
3 - Configure Akamai to use If-Modified-Since. This will force Akamai to validate whether any requests have changed.
4 - Use ECCU, which can purge a whole directory. This can take up to 40 minutes, but should be manageable during a maintenance window.
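To illustrate option 1, here is a hypothetical helper (the DEPLOY_VERSION constant and the helper name are invented for this sketch; in practice the version usually comes from your build or is baked into the rendered HTML by origin):
// bump this constant on origin with every deployment
var DEPLOY_VERSION = '2';

// key every asset URL by the deploy version, so a new deploy misses the CDN cache
function versioned(url) {
  return url + (url.indexOf('?') === -1 ? '?v=' : '&v=') + DEPLOY_VERSION;
}

versioned('/css/site.css'); // -> '/css/site.css?v=2'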
I don't think this would be possible, based on my experience with Akamai (but things change faster than I can keep up with). You can flush content manually (at a cost), so you could flush /*. We used to do this for particular files during deployments (never /*, because we had over 1.2M URLs), but I can't see how Akamai could cache a non-visible version of your site for instant cutover without having some secondary domain and origin.
However, I have also found Akamai pretty good to deal with, and it would definitely be worth contacting them about a solution.
