My application uses Varnish 3.0.2, and I am facing a weird problem: sometimes a page is served from Varnish with a HIT, but immediately afterwards the same page returns a MISS.
I was under the impression that once a page is served from the cache, it continues to be served from there until the TTL expires. Am I wrong in understanding that?
Here are the response headers for both scenarios:
HIT
HTTP/1.1 200 OK
Server: Apache/2.4.16 (Unix) mod_auth_kerb/5.4 PHP/5.3.29
X-Powered-By: PHP/5.3.29
X-Drupal-Cache: MISS
Content-Language: en
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Type: text/html; charset=utf-8
cache-control: max-age=86400, public
X-Cookie-Debug: Request cookie:
X-Request-URL: /org/31633421?unit=31633421
Content-Length: 11986
Accept-Ranges: bytes
Date: Wed, 24 Apr 2019 14:26:43 GMT
X-Varnish: 330015711 330015651
Via: 1.1 varnish
Connection: keep-alive
X-Varnish-Cache: HIT
X-Varnish-Cache-Hits: 1
X-Varnish-Age: 188
X-Varnish-Leg: 128.87.225.172
X-Varnish-Cache-Version: 3.0.2
MISS
HTTP/1.1 200 OK
Server: Apache/2.4.16 (Unix) mod_auth_kerb/5.4 PHP/5.3.29
X-Powered-By: PHP/5.3.29
X-Drupal-Cache: MISS
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Cache-Control: public, max-age=300
Content-Language: en
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Type: text/html; charset=utf-8
X-Cookie-Debug: Request cookie: _gat_UA-15166137-36=1
X-Request-URL: /org/31633421?unit=31633421
Content-Length: 11978
Accept-Ranges: bytes
Date: Wed, 24 Apr 2019 14:23:52 GMT
X-Varnish: 1900997574
Via: 1.1 varnish
Connection: keep-alive
X-Varnish-Cache: MISS
X-Varnish-Age: 0
X-Varnish-Leg: 128.87.225.158
X-Varnish-Cache-Version: 3.0.2
I have tried increasing the TTL value and removing all the cookies (including Google Analytics), but it still behaves erratically.
Any idea why?
Update
It seems this happens because of the following Google Tag Manager JS code included in my view template:
<script>(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':
new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],
j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=
'https://www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);
})(window,document,'script','dataLayer','GTM-XXX');</script>
Turns out it WAS actually a problem in the VCL configuration, with the regex I was using: I had not considered the non-alpha characters in the Google Analytics cookie names. I modified the regex to _[_\-\.\=a-zA-Z0-9] and everything is fine again!
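For anyone hitting the same thing, here is a minimal sketch of the kind of vcl_recv cookie-stripping rule involved. Apart from the character class above, the surrounding code is an assumption about what such a rule typically looks like, not my exact configuration:
sub vcl_recv {
    # Strip Google Analytics / Tag Manager cookies: their names start with "_"
    # and can contain non-alpha characters (e.g. _gat_UA-15166137-36).
    set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)_[_\-\.\=a-zA-Z0-9]+=[^;]*", "");
    # Tidy up a leading separator left behind by the substitution.
    set req.http.Cookie = regsub(req.http.Cookie, "^\s*;\s*", "");
    # If nothing meaningful is left, drop the header so the request can be cached.
    if (req.http.Cookie ~ "^\s*$") {
        unset req.http.Cookie;
    }
}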
Hope this helps somebody.
My guess is that the responses come from two different Varnish servers, based on these headers in the two responses:
X-Varnish-Leg: 128.87.225.172
and
X-Varnish-Leg: 128.87.225.158
Each Varnish instance keeps its own cache, so a request that lands on the leg that already holds the object gets a HIT, while the other leg returns a MISS until it has fetched and cached the object itself.
+1 for Ronald's answer. Also, please consider upgrading to the latest Varnish 6: years have passed since Varnish 3, many bugs have been fixed and many improvements built, and V3 is end-of-life.
Related
We launched a web app about a week ago, experienced a heavy load spike and were down for almost 2 hours. I won't mention the company by name, but we were leaning on them for recommendations to prevent this exact thing.
They said that since we were using Varnish, we could handle the traffic influx quite easily. However, we didn't verify caching was working as intended. It was not.
TL;DR: Our web app is sending Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 headers with its responses, and there's no indication of why that is.
Where can I look to prevent these headers from being sent?
PHP: 5.6
Nginx: 1.4.6
Varnish: 1.1
Wordpress: 4.6.12
Timber: 1.2.4
The Linux admins we're working with have said they scoured the configs and haven't found anything specifying those headers, except for AJAX requests:
# don't cache AJAX requests
if (req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache" || req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") {
    return (pass); # body not shown in the original excerpt; pass is the usual action here
}
Here's a curl from pre-launch, when we had correctly configured Varnish to cache after forcing HTTPS (force-https plugin) on the site:
$ curl -Ik -H'X-Forwarded-Proto: *************
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Content-Type: text/html; charset=UTF-8
Vary: Accept-Encoding
X-Server: *****
Date: Sat, 03 Nov 2018 22:36:43 GMT
X-Varnish: 53061104
Age: 0
Via: 1.1 varnish
Connection: keep-alive
And from post-launch:
curl -ILk ***********
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
X-Varnish: 691817320
Vary: Accept-Encoding
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Content-Type: text/html; charset=UTF-8
X-Server: ****
Date: Mon, 19 Nov 2018 19:17:02 GMT
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Pragma: no-cache
Transfer-Encoding: chunked
Accept-Ranges: bytes
Via: 1.1 varnish
Connection: Keep-Alive
Set-Cookie: X-Mapping-fjhppofk=33C486CB71216B67C5C5AB8F5E63769E; path=/
Age: 0
Force-https plugin: We activated this, updated the Varnish config to avoid a redirect loop and confirmed it was working a week prior to launch.
Plugins: These did not change, except for force-https.
Web App: It's an updated version of the previous app, a complete redesign, but nothing in the app, from what I can tell, specifies that no-store/no-cache headers be sent.
Where should I start? Thanks!
What is sending these headers is the PHP engine.
It does so whenever you initiate a session, which is clearly happening here, judging by the presence of Set-Cookie.
Make sure that PHP sessions are initiated only when absolutely needed. By default, Varnish will not cache when the response includes either Set-Cookie or a "negative" Cache-Control; you have both.
So getting rid of extraneous session_start() and/or setcookie() calls is the key here.
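For illustration, a minimal sketch of guarding session_start(); the $needs_session flag is hypothetical and stands in for whatever your application uses to decide a session is genuinely required:
<?php
// Only initiate a session when the page truly needs one, or when the
// visitor already carries a session cookie. Anonymous page views then
// get no Set-Cookie and no anti-caching headers from PHP.
$needs_session = false; // hypothetical flag, set by your application
if ($needs_session || isset($_COOKIE[session_name()])) {
    session_start();
}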
You can find more info on when to expect anti-caching headers to be sent here.
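If you cannot change the application right away, a stopgap on the Varnish side is to drop the session cookie for URLs that do not need it. A rough sketch (the path whitelist is a placeholder, and this is VCL 4 syntax; adapt both to your setup):
sub vcl_recv {
    # Placeholder whitelist: only these paths keep the PHP session cookie.
    if (req.url !~ "^/(wp-admin|wp-login\.php|cart|checkout)") {
        # Strip PHPSESSID so the request becomes cacheable.
        set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)PHPSESSID=[^;]*", "");
        if (req.http.Cookie ~ "^\s*$") {
            unset req.http.Cookie;
        }
    }
}
Note this only cleans up the request side; the response-side headers are what the next answer's vcl_backend_response snippet deals with.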
You need to fix the backend, but at the very least you can strip the offending header and/or override it in Varnish with the VCL snippet below. (vcl_backend_response is Varnish 4+ syntax; on Varnish 3 the equivalent subroutine is vcl_fetch.)
sub vcl_backend_response {
    # kill the Cache-Control header so that clients downstream don't see it
    # (in this subroutine the response object is "beresp", not "req")
    unset beresp.http.Cache-Control;
    # alternatively, you can override the header instead:
    # set beresp.http.Cache-Control = "whatever string you desire";
    # you can also force the TTL, for example cache for one hour:
    set beresp.ttl = 1h;
    # also, if you return now, the builtin.vcl (https://github.com/varnishcache/varnish-cache/blob/master/bin/varnishd/builtin.vcl)
    # doesn't get executed; that is generally what decides content is uncacheable
    return (deliver);
}
I'm having a strange issue when I reload a page sequentially: sometimes it loads fine, sometimes it loads with missing images/CSS, and sometimes it redirects to my site's 404 page. Below are 2 sequential curl commands which might be helpful.
For background, I've already cleared out the previous page slugs in the DB and restored the .htaccess file. Any pointers are greatly appreciated!
[my machine]:$ curl -s -D - [my url] -o /dev/null
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Date: Tue, 04 Mar 2014 21:44:50 GMT
Server: Apache
X-Powered-By: PHP/5.3.14
Content-Length: 6131
Connection: keep-alive
[my machine]:$ curl -s -D - [my url] -o /dev/null
HTTP/1.1 404 Not Found
Cache-Control: no-cache, must-revalidate, max-age=0
Content-Type: text/html; charset=UTF-8
Date: Tue, 04 Mar 2014 21:44:51 GMT
Expires: Wed, 11 Jan 1984 05:00:00 GMT
Last-Modified: Tue, 04 Mar 2014 21:44:51 GMT
Pragma: no-cache
Server: Apache
X-Pingback: http://promotions.glamour.com/xmlrpc.php
X-Powered-By: PHP/5.3.14
transfer-encoding: chunked
Connection: keep-alive
Wordpress randomly returning 404
I had a similar problem and I discovered the fix. Leaving this for whoever may come to this page as I did in search of a solution.
It turned out that some plug-ins were causing a spike in memory consumption, and the shared host (DreamHost) was killing requests that took too much memory and returning a 404 error. In my log files I was seeing "Premature end of script headers".
I disabled all non-critical plug-ins and not only have the random 404s stopped, but the site is loading much faster overall.
So it turns out the solution had nothing to do with Wordpress.
Long story short - a second duplicate server had been quietly set up to handle large traffic spikes, and our build scripts knew nothing about that new server and weren't deploying to it.
Ahh the joys of troubleshooting at a large company...
This is an example response from my Amazon S3 bucket:
$ curl -I http://amazon_bucket/image.jpg
HTTP/1.1 200 OK
x-amz-id-2: Tmr9SynKe8ztlB/Jix1hNrclwyc/k4NVHyqK3B0vNKUoPFIxfzwALi0XQRwEjhzO
x-amz-request-id: DCFDBCF510988AFB
Date: Wed, 27 Mar 2013 13:06:34 GMT
Cache-Control: public, max-age=2629000
Expires: Wed, 26 Mar 2014 23:00:00 GMT
Last-Modified: Wed, 27 Mar 2013 13:00:19 GMT
ETag: "52dd53ea738c7824b3f67cfea6a3af2a"
Accept-Ranges: bytes
Content-Type: image/jpeg
Content-Length: 627046
Server: AmazonS3
I would expect the browser to cache the image and serve it from cache. Instead, when I reload the page, my browser makes a request, which yields a 304 Not Modified response. Why is it acting as if the must-revalidate option were set? Why isn't the browser serving the image directly from cache? The options I've configured on the image via my S3 client are these:
Cache-Control: public, max-age=2629000
Expires: Wed, 26 Mar 2014 23:00:00 GMT
Is there some other option I should be passing to the S3 files? This might have a simple answer, but I see that the requests my browser makes to get these pictures all have the following headers:
Cache-Control:no-cache
Pragma:no-cache
Why is my browser sending those?
I was hitting refresh, and apparently this always triggers a conditional If-Modified-Since request (along with the Cache-Control: no-cache and Pragma: no-cache request headers noted above). If you visit the page normally, the asset is served straight from the browser cache.
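You can reproduce the refresh behaviour with curl; a sketch reusing the Last-Modified value from the response above:
$ curl -I -H 'If-Modified-Since: Wed, 27 Mar 2013 13:00:19 GMT' http://amazon_bucket/image.jpg
This should come back as 304 Not Modified, the same revalidation a refresh triggers, while a plain curl -I on the URL returns the full 200 response.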
I have a site hosted on a dedicated server (Windows Server 2008 R2, IIS 7.5), and every so often I get a garbled web page similar to the ones below. The entire screen is filled with garbage. There seems to be no pattern to the issue, and it happens to others as well. I could go for hours, maybe days, surfing the site without seeing it, though. They are all .aspx pages on the site. Any ideas?
HTTP/1.1 200 OK Cache-Control: private Content-Type: text/html;
charset=utf-8 Content-Encoding: gzip Vary: Accept-Encoding Server:
Microsoft-IIS/7.5 X-AspNet-Version: 4.0.30319 X-Powered-By: ASP.NET
Date: Thu, 10 Mar 2011 17:48:59 GMT Content-Length: 9874 ? ????? ??
I?%&/m?{ J?J??t? ? $?# ????? iG#)?*??eVe]f #????{???{??;?N'????\fd
l??J??!??? ?~| ?"~??7N ??O?U????`#??P??????l9??T?L?????????C?7N?????? - snip -
HTTP/1.1 200 OK Cache-Control: private Content-Type: text/html;
charset=utf-8 Content-Encoding: gzip Vary: Accept-Encoding Server:
Microsoft-IIS/7.5 X-AspNet-Version: 4.0.30319 X-Powered-By: ASP.NET
Date: Mon, 28 Mar 2011 20:23:42 GMT Content-Length: 10601 ‹ í½
I–%&/mÊ{ JõJ×àt¡ € $Ø # ìÁˆÍæ’ì iG#)«* ÊeVe]f #Ìí ¼÷Þ{ï½÷Þ{ï½÷º;
N'÷ßÿ?\fd löÎJÚÉž!€ªÈ ?~| ?"~ãä7N ÿ®O¿^ämFãiWÛù/Z —Ÿ}tR-[j²ýæz•
”Nå¯Ï>jówí]À?L§ó¬nòö³¢©¶
î?ÜÞý(½{ô¸,–oÓ:/?û¨i¯Ë¼™çyûQ:¯óóÏ>ªó¦Z×Ó¼¹;mš»ôWÞŽé· Ò–zQàü÷"Ÿ A˜Öy¾|O
- snip-
Without inspecting the server I would not be able to tell you much, because a lot of the time you are just looking for idiosyncrasies or differences in configuration.
All I can say, for what it's worth, is that this seems to have something to do with gzip (and not the character encoding), considering compression is a newer feature in IIS. Both cases have gzip turned on. I suggest turning the feature off for a while to see whether it happens again (see the web.config sketch below). This could in fact be a bug in IIS.
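If it helps the test, compression can be switched off site-wide in web.config; a sketch using the standard system.webServer schema (verify the section is unlocked on your server):
<configuration>
  <system.webServer>
    <!-- Temporarily disable static and dynamic compression while testing -->
    <urlCompression doStaticCompression="false" doDynamicCompression="false" />
  </system.webServer>
</configuration>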
This has been bugging me for a while now. Whenever I try to share my website link on Facebook or another link-sharing site, the site either removes the URL (as if it doesn't recognize it as valid) or, in Facebook's case, can't retrieve the meta-data automatically.
I'm pretty sure that it used to work. However, Googling / StackOverflowing for this problem is a difficult task, since I have no idea what could possibly be causing it.
I've tried to create a static .HTM file on my website, and that works fine:
test.htm
My default home page is a classic ASP page (yeah I know, a PHP version is in the works) which uses the IIS 7 URL Rewrite module.
I've tried checking the result codes and headers for both test.htm and my default home page on this page: http://gsitecrawler.com/tools/Server-Status.aspx
These are the results:
test.htm
URL=http://www.orango.nu/test.htm
Result code: 200 (OK / OK)
Content-Type: text/html
Last-Modified: Fri, 04 Feb 2011 10:16:55 GMT
Accept-Ranges: bytes
ETag: "0d877a654c4cb1:0"
Server: Microsoft-IIS/7.0
X-Powered-By: ASP.NET
Date: Fri, 04 Feb 2011 10:40:08 GMT
Content-Length: 452
default home page /
URL=http://www.orango.nu
Result code: 200 (OK / OK)
Cache-Control: public
Content-Length: 13463
Content-Type: text/html; Charset=UTF-8
Accept-Ranges: bytes
Server: Microsoft-IIS/7.0
Set-Cookie: ASPSESSIONIDSCSADCAR=DLPBECCBGDJMADLEPMOMHDDC; path=/
X-Powered-By: ASP.NET
Date: Fri, 04 Feb 2011 10:24:22 GMT
The first 4 lines of my default.asp (/) file are:
Response.ContentType = "text/html"
Response.AddHeader "Content-Type", "text/html;charset=UTF-8"
Response.CodePage = 65001
Response.CharSet = "UTF-8"
Does anyone have an idea what could be wrong and/or how to fix it? Any help or advice would be much appreciated, because this is driving me to the edge of madness.
The Content-Type looks wrong on your homepage. Setting it via Response.ContentType and then also calling Response.AddHeader "Content-Type", ... emits the header twice, so the two values end up merged:
HTTP/1.1 200 OK
Cache-Control: private
Content-Length: 13463
Content-Type: text/html;charset=UTF-8,text/html; Charset=UTF-8
Server: Microsoft-IIS/7.0
X-Powered-By: ASP.NET
Date: Fri, 04 Feb 2011 10:48:39 GMT
I also don't see the need, at least for the homepage, for the Cache-Control: private header.
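A sketch of the likely fix in default.asp: set the type once and drop the AddHeader call, since AddHeader appends a second Content-Type header instead of replacing the first:
<%
' Set the content type and charset exactly once.
Response.ContentType = "text/html"
Response.CodePage = 65001
Response.CharSet = "UTF-8"
' Do NOT also call Response.AddHeader "Content-Type", ... - that is what
' produces the merged "text/html;charset=UTF-8,text/html; Charset=UTF-8".
%>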