Django Static Files Won't Reload On Firefox [duplicate]

I have noticed that some browsers (in particular, Firefox and Opera) are very zealous in using cached copies of .css and .js files, even between browser sessions. This leads to a problem when you update one of these files, but the user's browser keeps on using the cached copy.
What is the most elegant way of forcing the user's browser to reload the file when it has changed?
Ideally, the solution would not force the browser to reload the file on every visit to the page.
I have found John Millikin's and da5id's suggestions to be useful. It turns out there is a term for this: auto-versioning.
I have posted a new answer below which is a combination of my original solution and John's suggestion.
Another idea that was suggested by SCdF would be to append a bogus query string to the file. (Some Python code, to automatically use the timestamp as a bogus query string, was submitted by pi..)
However, there is some discussion as to whether or not the browser would cache a file with a query string. (Remember, we want the browser to cache the file and use it on future visits. We only want it to fetch the file again when it has changed.)

This solution is written in PHP, but it should be easily adapted to other languages.
The original .htaccess regex can cause problems with files like json-1.3.js. The solution is to only rewrite if there are exactly 10 digits at the end. (Because 10 digits covers all timestamps from 9/9/2001 to 11/20/2286.)
First, we use the following rewrite rule in .htaccess:
RewriteEngine on
RewriteRule ^(.*)\.[\d]{10}\.(css|js)$ $1.$2 [L]
Now, we write the following PHP function:
/**
 * Given a file, i.e. /css/base.css, replaces it with a string containing the
 * file's mtime, i.e. /css/base.1221534296.css.
 *
 * @param $file The file to be loaded. Must be an absolute path (i.e.
 *              starting with slash).
 */
function auto_version($file)
{
    if (strpos($file, '/') !== 0 || !file_exists($_SERVER['DOCUMENT_ROOT'] . $file))
        return $file;

    $mtime = filemtime($_SERVER['DOCUMENT_ROOT'] . $file);
    return preg_replace('{\\.([^./]+)$}', ".$mtime.\$1", $file);
}
Now, wherever you include your CSS, change it from this:
<link rel="stylesheet" href="/css/base.css" type="text/css" />
To this:
<link rel="stylesheet" href="<?php echo auto_version('/css/base.css'); ?>" type="text/css" />
This way, you never have to modify the link tag again, and the user will always see the latest CSS. The browser will be able to cache the CSS file, but when you make any changes to your CSS the browser will see this as a new URL, so it won't use the cached copy.
This can also work with images, favicons, and JavaScript. Basically anything that is not dynamically generated.

Simple Client-side Technique
In general, caching is good... So there are a couple of techniques, depending on whether you're fixing the problem for yourself as you develop a website, or whether you're trying to control cache in a production environment.
General visitors to your website won't have the same experience that you're having when you're developing the site. Since the average visitor comes to the site less frequently (maybe only a few times each month, unless you're a Google or hi5 Networks), they are less likely to have your files in cache, and that may be enough.
If you want to force a new version into the browser, you can always add a query string to the request, and bump up the version number when you make major changes:
<script src="/myJavascript.js?version=4"></script>
This will ensure that everyone gets the new file. It works because the browser looks at the URL of the file to determine whether it has a copy in cache. If your server isn't set up to do anything with the query string, it will be ignored, but the name will look like a new file to the browser.
On the other hand, if you're developing a website, you don't want to change the version number every time you save a change to your development version. That would be tedious.
So while you're developing your site, a good trick would be to automatically generate a query string parameter:
<!-- Development version: -->
<script>document.write('<script src="/myJavascript.js?dev=' + Math.floor(Math.random() * 100) + '"><\/script>');</script>
Adding a query string to the request is a good way to version a resource, but for a simple website this may be unnecessary. And remember, caching is a good thing.
It's also worth noting that the browser isn't necessarily stingy about keeping files in cache. Browsers have policies for this sort of thing, and they are usually playing by the rules laid down in the HTTP specification. When a browser makes a request to a server, part of the response is an Expires header... a date which tells the browser how long it should be kept in cache. The next time the browser comes across a request for the same file, it sees that it has a copy in cache and looks to the Expires date to decide whether it should be used.
So believe it or not, it's actually your server that is making that browser cache so persistent. You could adjust your server settings and change the Expires headers, but the little technique I've written above is probably a much simpler way for you to go about it. Since caching is good, you usually want to set that date far into the future (a "Far-future Expires Header"), and use the technique described above to force a change.
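For instance, a minimal .htaccess sketch of a far-future Expires policy (assuming Apache with mod_expires enabled; the types and lifetimes are illustrative):
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType text/css "access plus 1 year"
    ExpiresByType application/javascript "access plus 1 year"
</IfModule>
Pair this with the query-string trick above so you can still push out changes when you need to.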
If you're interested in more information on HTTP or how these requests are made, a good book is "High Performance Web Sites" by Steve Souders. It's a very good introduction to the subject.

Google's mod_pagespeed plugin for Apache will do auto-versioning for you. It's really slick.
It parses HTML on its way out of the webserver (works with PHP, Ruby on Rails, Python, static HTML -- anything) and rewrites links to CSS, JavaScript, image files so they include an id code. It serves up the files at the modified URLs with a very long cache control on them. When the files change, it automatically changes the URLs so the browser has to re-fetch them. It basically just works, without any changes to your code. It'll even minify your code on the way out too.

Instead of changing the version manually, I would recommend you use an MD5 hash of the actual CSS file.
So your URL would be something like
http://mysite.com/css/[md5_hash_here]/style.css
You could still use the rewrite rule to strip out the hash, but the advantage is that now you can set your cache policy to "cache forever", since if the URL is the same, that means that the file is unchanged.
You can then write a simple shell script that would compute the hash of the file and update your tag (you'd probably want to move it to a separate file for inclusion).
Simply run that script every time CSS changes and you're good. The browser will ONLY reload your files when they are altered. If you make an edit and then undo it, there's no pain in figuring out which version you need to return to in order for your visitors not to re-download.
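If you'd rather compute the hash at request time than run a shell script, here is a rough PHP sketch of the same idea (the helper name and URL layout are made up for illustration; it assumes the rewrite rule strips the hash segment back out):
<?php
// Rough sketch: insert the file's MD5 hash as a path segment,
// e.g. /css/style.css -> /css/<hash>/style.css
function md5_version($file)
{
    $path = $_SERVER['DOCUMENT_ROOT'] . $file;
    if (!file_exists($path)) {
        return $file;
    }
    return preg_replace('{/([^/]+)$}', '/' . md5_file($path) . '/$1', $file);
}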

I am not sure why you guys/gals are taking so much pain to implement this solution.
All you need to do is get the file's modified timestamp and append it as a query string to the file.
In PHP I would do it as:
<link href="mycss.css?v=<?= filemtime('mycss.css') ?>" rel="stylesheet">
filemtime() is a PHP function that returns the file modified timestamp.
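With a hypothetical modification time, the rendered tag would come out something like:
<link href="mycss.css?v=1577772366" rel="stylesheet">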

You can just put ?foo=1234 at the end of your CSS / JavaScript import, changing 1234 to be whatever you like. Have a look at the Stack Overflow HTML source for an example.
The idea there being that the ? parameters are discarded / ignored on the request anyway and you can change that number when you roll out a new version.
Note: There is some argument with regard to exactly how this affects caching. I believe the general gist of it is that GET requests, with or without parameters, should be cacheable, so the above solution should work.
However, it is down to the web server to decide whether it adheres to that part of the spec, and to the user's browser, which can just go right ahead and ask for a fresh version anyway.

I've heard this called "auto versioning". The most common method is to include the static file's modification time somewhere in the URL, and strip it out using rewrite handlers or URL configurations.
See also:
Automatic asset versioning in Django
Automatically Version Your CSS and JavaScript Files

The 30 or so existing answers are great advice for a circa 2008 website. However, when it comes to a modern, single-page application (SPA), it might be time to rethink some fundamental assumptions… specifically the idea that it is desirable for the web server to serve only the single, most recent version of a file.
Imagine you're a user that has version M of a SPA loaded into your browser:
Your CD pipeline deploys the new version N of the application onto the server
You navigate within the SPA, which sends an XMLHttpRequest (XHR) to the server to get /some.template
(Your browser hasn't refreshed the page, so you're still running version M)
The server responds with the contents of /some.template — do you want it to return version M or N of the template?
If the format of /some.template changed between versions M and N (or the file was renamed or whatever) you probably don't want version N of the template sent to the browser that's running the old version M of the parser.†
Web applications run into this issue when two conditions are met:
Resources are requested asynchronously some time after the initial page load
The application logic assumes things (that may change in future versions) about resource content
Once your application needs to serve up multiple versions in parallel, solving caching and "reloading" becomes trivial:
Install all site files into versioned directories: /v<release_tag_1>/…files…, /v<release_tag_2>/…files…
Set HTTP headers to let browsers cache files forever
(Or better yet, put everything in a CDN)
Update all <script> and <link> tags, etc. to point to that file in one of the versioned directories
That last step sounds tricky, as it could require calling a URL builder for every URL in your server-side or client-side code. Or you could just make clever use of the <base> tag and change the current version in one place.
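For instance, a sketch of the <base> approach (the version path is hypothetical):
<!-- Switch the release in one place; relative URLs below inherit the prefix -->
<base href="/v1.2.3/">
<link rel="stylesheet" href="styles/base.css">
<script src="js/app.js"></script>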
† One way around this is to be aggressive about forcing the browser to reload everything when a new version is released. But for the sake of letting any in-progress operations to complete, it may still be easiest to support at least two versions in parallel: v-current and v-previous.

In Laravel (PHP) we can do it in the following clear and elegant way (using file modification timestamp):
<script src="{{ asset('/js/your.js?v='.filemtime('js/your.js')) }}"></script>
And similar for CSS
<link rel="stylesheet" href="{{asset('css/your.css?v='.filemtime('css/your.css'))}}">
Example HTML output (filemtime returns the time as a Unix timestamp):
<link rel="stylesheet" href="assets/css/your.css?v=1577772366">

Don’t use foo.css?version=1!
Browsers aren't supposed to cache URLs with GET variables. According to http://www.thinkvitamin.com/features/webapps/serving-javascript-fast, though Internet Explorer and Firefox ignore this, Opera and Safari don't! Instead, use foo.v1234.css, and use rewrite rules to strip out the version number.

Here is a pure JavaScript solution
(function () {
    // Match this timestamp with the release of your code
    var lastVersioning = Date.UTC(2014, 11, 20, 2, 15, 10);

    // localStorage stores strings; the '>' comparison below coerces it to a number
    var lastCacheDateTime = localStorage.getItem('lastCacheDatetime');

    if (lastCacheDateTime) {
        if (lastVersioning > lastCacheDateTime) {
            var reload = true;
        }
    }

    localStorage.setItem('lastCacheDatetime', Date.now());

    if (reload) {
        location.reload(true);
    }
})();
The above will look for the last time the user visited your site. If the last visit was before you released new code, it uses location.reload(true) to force page refresh from server.
I usually have this as the very first script within the <head> so it's evaluated before any other content loads. If a reload needs to occur, it's hardly noticeable to the user.
I am using local storage to store the last visit timestamp on the browser, but you can add cookies to the mix if you're looking to support older versions of IE.

The RewriteRule needs a small update for JavaScript or CSS files that contain dot-notation versioning at the end, e.g., json-1.3.js.
I added a dot-negation class [^.] to the regex, so .number. is ignored.
RewriteRule ^(.*)\.[^.][\d]+\.(css|js)$ $1.$2 [L]

Interesting post. Having read all the answers here, combined with the fact that I have never had any problems with "bogus" query strings (I am unsure why everyone is so reluctant to use them), I guess the solution (which removes the need for Apache rewrite rules as in the accepted answer) is to compute a short hash of the CSS file contents (instead of the file datetime) as a bogus query string.
This would result in the following:
<link rel="stylesheet" href="/css/base.css?[hash-here]" type="text/css" />
Of course, the datetime solutions also get the job done when a CSS file is edited, but it is the CSS file's content that matters, not its datetime, so why mix the two up?
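A minimal PHP sketch of that idea (the helper name is made up; a truncated hash keeps URLs short):
<?php
// Content-hash query string: changes only when the file's contents change
function hash_version($file)
{
    $path = $_SERVER['DOCUMENT_ROOT'] . $file;
    return file_exists($path)
        ? $file . '?' . substr(md5_file($path), 0, 8)
        : $file;
}
?>
<link rel="stylesheet" href="<?php echo hash_version('/css/base.css'); ?>" type="text/css" />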

For ASP.NET 4.5 and greater you can use script bundling.
The request http://localhost/MvcBM_time/bundles/AllMyScripts?v=r0sLDicvP58AIXN_mc3QdyVvVj5euZNzdsa2N1PKvb81 is for the bundle AllMyScripts and contains a query string pair v=r0sLDicvP58AIXN_mc3QdyVvVj5euZNzdsa2N1PKvb81. The query string v has a value token that is a unique identifier used for caching. As long as the bundle doesn't change, the ASP.NET application will request the AllMyScripts bundle using this token. If any file in the bundle changes, the ASP.NET optimization framework will generate a new token, guaranteeing that browser requests for the bundle will get the latest bundle.
There are other benefits to bundling, including increased performance on first-time page loads with minification.

For my development, I find that Chrome has a great solution.
https://superuser.com/a/512833
With developer tools open, simply long click the refresh button and let go once you hover over "Empty Cache and Hard Reload".
This is my best friend, and is a super lightweight way to get what you want!

Thanks to Kip for his perfect solution!
I extended it for use as a Zend_View_Helper. Because my client runs his page on a virtual host, I also extended it for that.
/**
 * Extend file paths with a timestamp to force the browser to
 * refresh files automatically when they are updated
 *
 * This is based on Kip's version, but now
 * also works on virtual hosts
 * @link http://stackoverflow.com/questions/118884/what-is-an-elegant-way-to-force-browsers-to-reload-cached-css-js-files
 *
 * Usage:
 * - extend your .htaccess file with
 *   # Route for My_View_Helper_AutoRefreshRewriter
 *   # which extends files with their timestamp so that
 *   # an automatic refresh occurs when they are updated
 *   # RewriteRule ^(.*)\.[^.][\d]+\.(css|js)$ $1.$2 [L]
 * - then use it in your view script like
 *   $this->headLink()->appendStylesheet($this->autoRefreshRewriter($this->cssPath . 'default.css'));
 */
class My_View_Helper_AutoRefreshRewriter extends Zend_View_Helper_Abstract {

    public function autoRefreshRewriter($filePath) {

        if (strpos($filePath, '/') !== 0) {
            // Path has no leading '/'
            return $filePath;
        } elseif (file_exists($_SERVER['DOCUMENT_ROOT'] . $filePath)) {
            // The file exists under the normal path,
            // so build the new path based on it
            $mtime = filemtime($_SERVER['DOCUMENT_ROOT'] . $filePath);
            return preg_replace('{\\.([^./]+)$}', ".$mtime.\$1", $filePath);
        } else {
            // Fetch the directory of the index.php file (the file from which
            // all others are included) and keep only the directory part
            $indexFilePath = dirname(current(get_included_files()));

            // Check if the file exists relative to the index file
            if (file_exists($indexFilePath . $filePath)) {
                // Get the timestamp based on this relative path
                $mtime = filemtime($indexFilePath . $filePath);

                // Write the generated timestamp into the path,
                // but use the original path, not the relative one
                return preg_replace('{\\.([^./]+)$}', ".$mtime.\$1", $filePath);
            } else {
                return $filePath;
            }
        }
    }
}

I have not seen the client-side DOM approach mentioned: creating the script node (or CSS element) dynamically:
<script>
    var node = document.createElement("script");
    node.type = "text/javascript";
    node.src = 'test.js?' + Math.floor(Math.random() * 999999999);
    document.getElementsByTagName("head")[0].appendChild(node);
</script>

Say you have a file available at:
/styles/screen.css
You can either append a query parameter with version information onto the URI, e.g.:
/styles/screen.css?v=1234
Or you can prepend version information, e.g.:
/v/1234/styles/screen.css
IMHO, the second method is better for CSS files, because they can refer to images using relative URLs which means that if you specify a background-image like so:
body {
background-image: url('images/happy.gif');
}
Its URL will effectively be:
/v/1234/styles/images/happy.gif
This means that if you update the version number used, the server will treat this as a new resource and not use a cached version. If you base your version number on the Subversion, CVS, etc. revision this means that changes to images referenced in CSS files will be noticed. That isn't guaranteed with the first scheme, i.e. the URL images/happy.gif relative to /styles/screen.css?v=1235 is /styles/images/happy.gif which doesn't contain any version information.
I have implemented a caching solution using this technique with Java servlets and simply handle requests to /v/* with a servlet that delegates to the underlying resource (i.e. /styles/screen.css). In development mode I set caching headers that tell the client to always check the freshness of the resource with the server (this typically results in a 304 if you delegate to Tomcat's DefaultServlet and the .css, .js, etc. file hasn't changed) while in deployment mode I set headers that say "cache forever".
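If you serve the files through Apache instead of a servlet, a rewrite sketch in the same spirit might look like this (the version-segment format is assumed):
RewriteEngine on
# Strip the /v/<version>/ prefix so the versioned URL maps to the real resource
RewriteRule ^v/[^/]+/(.*)$ /$1 [L]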

You could simply append some random number to the CSS and JavaScript URLs, like:
example.css?randomNo=Math.random()

Google Chrome has the Hard Reload as well as the Empty Cache and Hard Reload option. You can click and hold the reload button (in Inspect Mode) to select one.

I recently solved this using Python. Here is the code (it should be easy to adapt to other languages):
import os

def import_tag(pattern, name, **kw):
    if name[0] == "/":
        name = name[1:]

    # Additional HTML attributes
    attrs = ' '.join(['%s="%s"' % item for item in kw.items()])

    try:
        # Get the file's modification time
        mtime = os.stat(os.path.join('/documentroot', name)).st_mtime
        include = "%s?%d" % (name, mtime)
        # This is the same as sprintf(pattern, attrs, include) in other
        # languages
        return pattern % (attrs, include)
    except OSError:
        # In case of error, return the include without the added query
        # parameter.
        return pattern % (attrs, name)

def script(name, **kw):
    return import_tag('<script %s src="/%s"></script>', name, **kw)

def stylesheet(name, **kw):
    return import_tag('<link rel="stylesheet" type="text/css" %s href="/%s">', name, **kw)
This code basically appends the file's timestamp as a query parameter to the URL. The following call
stylesheet("/main.css")
will result in
<link rel="stylesheet" type="text/css" href="/main.css?1221842734">
The advantage, of course, is that you never have to change your HTML content again; touching the CSS file will automatically trigger a cache invalidation. It works very well and the overhead is not noticeable.

You can force a "session-wide caching" if you add the session-id as a spurious parameter of the JavaScript/CSS file:
<link rel="stylesheet" href="myStyles.css?ABCDEF12345sessionID" />
<script language="javascript" src="myCode.js?ABCDEF12345sessionID"></script>
If you want a version-wide caching, you could add some code to print the file date or similar. If you're using Java you can use a custom-tag to generate the link in an elegant way.
<link rel="stylesheet" href="myStyles.css?20080922_1020" />
<script language="javascript" src="myCode.js?20080922_1120"></script>

For ASP.NET I propose the following solution with advanced options (debug/release mode, versions):
Include JavaScript or CSS files this way:
<script type="text/javascript" src="Scripts/exampleScript<%=Global.JsPostfix%>" />
<link rel="stylesheet" type="text/css" href="Css/exampleCss<%=Global.CssPostfix%>" />
Global.JsPostfix and Global.CssPostfix are calculated by the following way in Global.asax:
protected void Application_Start(object sender, EventArgs e)
{
    ...
    string jsVersion = ConfigurationManager.AppSettings["JsVersion"];
    bool updateEveryAppStart = Convert.ToBoolean(ConfigurationManager.AppSettings["UpdateJsEveryAppStart"]);
    int buildNumber = System.Reflection.Assembly.GetExecutingAssembly().GetName().Version.Revision;
    JsPostfix = "";
#if !DEBUG
    JsPostfix += ".min";
#endif
    JsPostfix += ".js?" + jsVersion + "_" + buildNumber;
    if (updateEveryAppStart)
    {
        Random rand = new Random();
        JsPostfix += "_" + rand.Next(); // fixed typo: was JsPosfix
    }
    ...
}

If you're using Git and PHP, you can reload the script from the cache each time there is a change in the Git repository, using the following code:
exec('git rev-parse --verify HEAD 2> /dev/null', $gitLog);
echo '<script src="/path/to/script.js?v=' . $gitLog[0] . '"></script>' . PHP_EOL;

Simply add this code where you want to do a hard reload (force the browser to reload cached CSS and JavaScript files):
$(window).load(function() {
    location.reload(true);
});
Do this inside .load so it does not refresh in a loop.

For development: use a browser setting: for example, Chrome network tab has a disable cache option.
For production: append a unique query parameter to the request (for example, ?q= followed by the value of Date.now()) with a server-side rendering framework or pure JavaScript code.
// Pure JavaScript unique query parameter generation
//
//=== myfile.js
function hello() { console.log('hello'); }
//=== end of file

<script type="text/javascript">
    // document.write is considered bad practice!
    // We can't use hello() yet: the versioned script below hasn't loaded
    document.write('<script type="text/javascript" src="myfile.js?q=' + Date.now() + '"><\/script>');
</script>
<script type="text/javascript">
    hello();
</script>

For developers with this problem while developing and testing:
Remove caching briefly.
"keep caching consistent with the file" .. it's way too much hassle ..
Generally speaking, I don't mind loading more; even re-loading files which did not change is, on most projects, practically irrelevant. While developing an application we are mostly loading from disk, on localhost:port, so this increase in network traffic is not a deal-breaking issue.
Most small projects are just playing around - they never end-up in production. So for them you don't need anything more...
As such, if you use Chrome DevTools, you can follow the disable-caching approach: open the Network tab and tick "Disable cache" (it applies while DevTools is open).
Firefox has an equivalent "Disable HTTP Cache" option in its developer tools settings.
Do this only in development. You also need a mechanism to force reload in production, since your users will keep using stale cached modules if you update your application frequently and don't provide a dedicated cache synchronisation mechanism like the ones described in the answers above.
Yes, this information is already in previous answers, but I still needed to do a Google search to find it.

It seems all answers here suggest some sort of versioning in the naming scheme, which has its downsides.
Browsers should be well aware of what to cache and what not to cache by reading the web server's response, in particular the HTTP headers - for how long is this resource valid? Was this resource updated since I last retrieved it? etc.
If things are configured 'correctly', just updating the files of your application should (at some point) refresh the browser's caches. You can for example configure your web server to tell the browser to never cache files (which is a bad idea).
A more in-depth explanation of how that works is in How Web Caches Work.
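To make the header mechanics concrete, here is a rough PHP sketch of serving a stylesheet with validation headers (paths and lifetimes are illustrative):
<?php
// Let the browser cache for an hour, then revalidate with a conditional
// request instead of re-downloading unchanged bytes.
$file  = $_SERVER['DOCUMENT_ROOT'] . '/css/base.css';
$mtime = filemtime($file);

header('Cache-Control: max-age=3600, must-revalidate');
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $mtime) . ' GMT');

if (isset($_SERVER['HTTP_IF_MODIFIED_SINCE']) &&
    strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) >= $mtime) {
    header('HTTP/1.1 304 Not Modified'); // the browser keeps its cached copy
    exit;
}

header('Content-Type: text/css');
readfile($file);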

Just use server-side code to add the date of the file... that way it will be cached and only reloaded when the file changes.
In ASP.NET:
<link rel="stylesheet" href="~/css/custom.css?d=@(System.Text.RegularExpressions.Regex.Replace(File.GetLastWriteTime(Server.MapPath("~/css/custom.css")).ToString(),"[^0-9]", ""))" />
<script type="text/javascript" src="~/js/custom.js?d=@(System.Text.RegularExpressions.Regex.Replace(File.GetLastWriteTime(Server.MapPath("~/js/custom.js")).ToString(),"[^0-9]", ""))"></script>
This can be simplified to:
<script src="<%= Page.ResolveClientUrlUnique("~/js/custom.js") %>" type="text/javascript"></script>
By adding an extension method to your project to extend Page:
public static class Extension_Methods
{
    public static string ResolveClientUrlUnique(this System.Web.UI.Page oPg, string sRelPath)
    {
        string sFilePath = oPg.Server.MapPath(sRelPath);
        string sLastDate = System.IO.File.GetLastWriteTime(sFilePath).ToString();
        string sDateHashed = System.Text.RegularExpressions.Regex.Replace(sLastDate, "[^0-9]", "");

        return oPg.ResolveClientUrl(sRelPath) + "?d=" + sDateHashed;
    }
}

You can use SRI to break the browser cache. You only have to update your index.html file with the new SRI hash every time. When the browser loads the HTML and finds that the SRI hash on the page doesn't match that of the cached version of the resource, it will reload the resource from your servers. It also comes with the nice side effect of bypassing cross-origin read blocking.
<script src="https://jessietessie.github.io/google-translate-token-generator/google_translate_token_generator.js" integrity="sha384-muTMBCWlaLhgTXLmflAEQVaaGwxYe1DYIf2fGdRkaAQeb4Usma/kqRWFWErr2BSi" crossorigin="anonymous"></script>

Related

Laravel/blade caching css files

I am working on an Nginx server with PHP-FPM. I installed Laravel 4.1 and Bootstrap v3.1.1, and here is the problem. For the last 30 minutes, I have been trying to change a CSS rule that I first declared to check Bootstrap.
.jumbotron{
background: red;
}
The first time it worked: the jumbotron container was red. So I removed that CSS value and kept working, but no matter which browser I use, the container is still red. I even checked the CSS file through Google Chrome's inspection tool, and it is showing me that first value where jumbotron had background: red. I deleted the CSS file, renamed it, and added new styles, and I configured Chrome not to cache pages. But still the same value. I'm convinced now that Laravel has kept a cache of the first style declaration.
Is there any way to disable this at all?
General explanation
When you access a Laravel Blade view, it will generate it to a temporary file so it doesn't have to process the Blade syntax every time you access to a view. These files are stored in app/storage/view with a filename that is the MD5 hash of the file path.
Usually when you change a view, Laravel regenerates these files automatically on the next view access and everything goes on. This is done by comparing the modification times of the generated file and the view's source file via the filemtime() function. Probably in your case there was a problem and the temporary file wasn't regenerated. In that case, you have to delete these files so they can be regenerated. It doesn't harm anything, because they are autogenerated from your views and can be regenerated anytime; they exist only for cache purposes.
Normally, they should be refreshed automatically, but you can delete these files anytime if they get stuck and you have problems like these, but as I said these should be just rare exceptions.
Code break down
All the following code is from laravel/framework/src/Illuminate/View/. I added some extra comments to the originals.
Get view
Starting from Engines/CompilerEngine.php we have the main code we need to understand the mechanics.
public function get($path, array $data = array())
{
    // Push the path to the stack of the last compiled templates.
    $this->lastCompiled[] = $path;

    // If this given view has expired, which means it has simply been edited since
    // it was last compiled, we will re-compile the views so we can evaluate a
    // fresh copy of the view. We'll pass the compiler the path of the view.
    if ($this->compiler->isExpired($path))
    {
        $this->compiler->compile($path);
    }

    // Return the MD5 hash of the path concatenated
    // to the app's view storage folder path.
    $compiled = $this->compiler->getCompiledPath($path);

    // Once we have the path to the compiled file, we will evaluate the paths with
    // typical PHP just like any other templates. We also keep a stack of views
    // which have been rendered for right exception messages to be generated.
    $results = $this->evaluatePath($compiled, $data);

    // Remove last compiled path.
    array_pop($this->lastCompiled);

    return $results;
}
Check if regeneration required
This is done in Compilers/Compiler.php. This is an important function: its result decides whether the view will be recompiled. If it returns false instead of true, that can be the reason views are not being regenerated.
public function isExpired($path)
{
    $compiled = $this->getCompiledPath($path);

    // If the compiled file doesn't exist we will indicate that the view is expired
    // so that it can be re-compiled. Else, we will verify the last modification
    // of the views is less than the modification times of the compiled views.
    if ( ! $this->cachePath || ! $this->files->exists($compiled))
    {
        return true;
    }

    $lastModified = $this->files->lastModified($path);

    return $lastModified >= $this->files->lastModified($compiled);
}
Regenerate view
If the view is expired it will be regenerated. In Compilers\BladeCompiler.php we see that the compiler will loop through all Blade keywords and finally give back a string that contains the compiled PHP code. Then it will check if the view storage path is set and save the file there with a filename that is the MD5 hash of the view's filename.
public function compile($path)
{
    $contents = $this->compileString($this->files->get($path));

    if ( ! is_null($this->cachePath))
    {
        $this->files->put($this->getCompiledPath($path), $contents);
    }
}
Evaluate
Finally, in Engines/PhpEngine.php, the view is evaluated. It imports the data passed to the view with extract(), includes the file at the passed path inside a try, and catches all exceptions with handleViewException(), which throws the exception again. There is some output buffering too.
Same issue here. I am using VirtualBox with Shared Folders pointing to my document root.
This pointed me in the right direction:
https://stackoverflow.com/a/26583609/1036602
Which led me to this:
http://www.danhart.co.uk/blog/vagrant-virtualbox-modified-files-not-updating-via-nginx-apache
and this:
https://forums.virtualbox.org/viewtopic.php?f=1&t=24905
If you're mounting your local dev root via vboxsf Shared Folders, set EnableSendFile Off in your apache2.conf (or sendfile off if using Nginx).
For what it's worth and because this answer came up first in my google search...
I had the same problem. The CSS and JS files wouldn't update. Deleting the cache files didn't work. The timestamps were not the problem. The only way I could update them was to change the filename, load it directly to get the 404 error, and then change the name back to the original name.
In the end the problem was not related to Laravel or the browser cache at all. The problem was due to NginX using sendfile which doesn't work with remote file systems. In my case, I was using VirtualBox for the OS and the remote file system was vboxsf through Guest Additions.
I hope this saves someone else some time.
In Laravel 5.8+ you can do it like so:
The version method will automatically append a unique hash to the filenames of all compiled files, allowing for more convenient cache busting:
mix.js('resources/js/app.js', 'public/js').version();
After generating the versioned file, you won't know the exact file name. So, you should use Laravel's global mix function within your views to load the appropriately hashed asset. The mix function will automatically determine the current name of the hashed file:
<script src="{{ mix('/js/app.js') }}"></script>
Full documentation: https://laravel.com/docs/5.8/mix

Fix serialized data broken due to editing MySQL database in a text editor?

Background: I downloaded a *.sql backup of my WordPress site's database, and replaced all instances of the old database table prefix with a new one (e.g. from the default wp_ to something like asdfghjkl_).
I've just learnt that WordPress uses serialized PHP strings in the database, and what I did will have messed with the integrity of the serialized string lengths.
The thing is, I deleted the backup file just before I learnt about this (as my website was still functioning fine), and installed a number of plugins since. So, there's no way I can revert back, and I therefore would like to know two things:
How can I fix this, if at all possible?
What kind of problems could this cause?
(This article states that, a WordPress blog for instance, could lose its settings and widgets. But this doesn't seem to have happened to me as all the settings for my blog are still intact. But I have no clue as to what could be broken on the inside, or what issues it'd pose in the future. Hence this question.)
Visit this page: http://unserialize.onlinephpfunctions.com/
On that page you should see this sample serialized string: a:1:{s:4:"Test";s:17:"unserialize here!";}. Take a piece of it-- s:4:"Test";. That means "string", 4 characters, then the actual string. I am pretty sure that what you did caused the numeric character count to be out of sync with the string. Play with the tool on the site mentioned above and you will see that you get an error if you change "Test" to "Tes", for example.
What you need to do is get those character counts to match your new string. If you haven't corrupted any of the other encoding-- removed a colon or something-- that should fix the problem.
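To make the failure mode concrete, a quick PHP sketch using the strings from that example:
<?php
$ok = 'a:1:{s:4:"Test";s:17:"unserialize here!";}';
var_dump(unserialize($ok));   // array(1) { ["Test"] => string(17) "unserialize here!" }

// The declared length 4 no longer matches the 3-character "Tes"
$bad = 'a:1:{s:4:"Tes";s:17:"unserialize here!";}';
var_dump(@unserialize($bad)); // bool(false)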
I came across the same problem when trying to change the domain from localhost to the real URL. After some searching, I found the answer in the WordPress documentation:
https://codex.wordpress.org/Moving_WordPress
I will quote what is written there:
To avoid that serialization issue, you have three options:
Use the Better Search Replace or Velvet Blues Update URLs plugins if you can access your Dashboard.
Use WP-CLI's search-replace if your hosting provider (or you) have installed WP-CLI.
Run a search and replace query manually on your database. Note: Only perform a search and replace on the wp_posts table.
I ended up using WP-CLI which is able to replace things in the database without breaking serialization: http://wp-cli.org/commands/search-replace/
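A typical invocation looks something like this (the domains are placeholders; skipping the guid column is commonly recommended for WordPress):
wp search-replace 'http://old-domain.test' 'https://www.new-domain.com' --skip-columns=guid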
I know this is an old question, but better late than never, I suppose. I ran into this problem recently, after inheriting a database that had had a find/replace executed on serialized data. After many hours of research, I discovered that this was because the string counts were off. Unfortunately, there was so much data with lots of escaping and newlines that I didn't know how to count in some cases, and I had so much data that I needed something automated.
Along the way, I stumbled across this question and Benubird's post helped put me on the right path. His example code did not work in production use on complex data, containing numerous special characters and HTML, with very deep levels of nesting, and it did not properly handle certain escaped characters and encoding. So I modified it a bit and spent countless hours working through additional bugs to get my version to "fix" the serialized data.
// do some DB query here
while ($res = db_fetch($qry)) {
    $str = $res->data;
    $sCount = 1; // don't try to count manually, which can be inaccurate; let serialize do its thing
    $newstring = unserialize($str);

    if (!$newstring) {
        preg_match_all('/s:([0-9]+):"(.*?)"(?=;)/su', $str, $m);
#       preg_match_all("/s:([0-9]+):(\"[^\"\\\\]*(?:\\\\.[^\"\\\\]*)*\")(?=;)/u", $str, $m); // alternate: almost works but leaves quotes in $m[2] output
#       print_r($m); exit;

        foreach ($m[1] as $k => $len) {
            /*** Possibly specific to my case: Spyropress Builder in WordPress ***/
            $m_clean = str_replace('\"', '"', $m[2][$k]); // convert escaped double quotes so that HTML will render properly
            // if newline is present, it will output directly in the HTML
            // nl2br won't work here (must find literally; not with double quotes!)
            $m_clean = str_replace('\n', '<br />', $m_clean);
            $m_clean = nl2br($m_clean); // but we DO need to convert actual newlines also
            /*********************************************************************/

            if ($sCount) {
                $m_new = $m[0][$k] . ';'; // we must account for the missing semi-colon not captured in regex!
                // NOTE: If we don't flush the buffers, things like <img src="http://whatever" can be replaced with <img src="//whatever" and break the serialize count!!!
                ob_end_flush(); // not sure why this is necessary but cost me 5 hours!!
                $m_ser = serialize($m_clean);
                if ($m_new != $m_ser) {
                    print "Replacing: $m_new\n";
                    print "With: $m_ser\n";
                    $str = str_replace($m_new, $m_ser, $str);
                }
            } else {
                $m_len = (strlen($m[2][$k]) - substr_count($m[2][$k], '\n'));
                if ($len != $m_len) {
                    $newstr = 's:' . $m_len . ':"' . $m[2][$k] . '"';
                    echo "Replacing: {$m[0][$k]}\n";
                    echo "With: $newstr\n\n";
                    $str = str_replace($m[0][$k], $newstr, $str); // fixed: replace the original match, not the undefined $m_new
                }
            }
        }
        print_r($str); // this is your FIXED serialized data!! Yay!
    }
}
A little geeky explanation on my changes:
I found that trying to count with Benubird's code as a base was too inaccurate for large datasets, so I ended up just using serialize to be sure the count was accurate.
I avoided the try/catch because, in my case, the try would succeed but just returned an empty string. So, I check for empty data instead.
I tried numerous regex's but only a mod on Benubird's would accurately handle all cases. Specifically, I had to modify the part that checked for the ";" because it would match on CSS like "width:100%; height:25px;" and broke the output. So, I used a positive lookahead to only match when the ";" was outside of the set of double quotes.
My case had lots of newlines, HTML, and escaped double quotes, so I had to add a block to clean that up.
There were a couple of weird situations where data would be replaced incorrectly by the regex and then the serialize would count it incorrectly as well. I found NOTHING on any sites to help with this and finally thought it might be related to caching or something like that and tried flushing the output buffer (ob_end_flush()), which worked, thank goodness!
Hope this helps someone... Took me almost 20 hours including the research and dealing with weird issues! :)
This script (https://interconnectit.com/products/search-and-replace-for-wordpress-databases/) can help to update an sql database with proper URLs everywhere, without encountering serialized data issues, because it will update the "characters count" that could throw your URLs out of sync whenever serialized data occurs.
The steps would be:
1. If you have already imported a messed-up database (widgets not working, theme options not there, etc.), just drop that database using phpMyAdmin. That is, remove everything in it. Then export and have at hand an un-edited dump of the old database.
2. Now you have to import the (un-edited) old database into the newly created one. You can do this via an import, or by copying over the db from phpMyAdmin. Notice that so far we haven't done any search and replace yet; we just have the old database content and structure in a new database with its own user and password. Your site will probably be inaccessible at this point.
3. Make sure you have your WordPress files freshly uploaded to the proper folder on the server, and edit your wp-config.php to make it connect to the new database.
4. Upload the script into a "secret" folder - just for security reasons - at the same level as wp-admin, wp-content, and wp-includes. Do not forget to remove it all once the search and replace has taken place, because otherwise you risk offering your DB details to the whole internet.
5. Now point your browser to the secret folder, and use the script's fine interface. It is very self-explanatory. Once used, we proceed to completely remove it from the server.
This should have your database properly updated, without any serialized data issues around: the new URL will be set everywhere, and serialized data characters counts will be accordingly updated.
Widgets will be passed over, and theme settings as well - two of the typical places that use serialized data in WordPress.
Done and tested solution!
If the error is due to the length of the strings being incorrect (something I have seen frequently), then you should be able to adapt this script to fix it:
foreach ($strings as $key => $str)
{
    try {
        unserialize($str);
    } catch (exception $e) {
        preg_match_all('#s:([0-9]+):"([^;]+)"#', $str, $m);
        foreach ($m[1] as $k => $len) {
            if ($len != strlen($m[2][$k])) {
                $newstr = 's:' . strlen($m[2][$k]) . ':"' . $m[2][$k] . '"';
                echo "len mismatch: {$m[0][$k]}\n";
                echo "should be: $newstr\n\n";
                $strings[$key] = str_replace($m[0][$k], $newstr, $str);
            }
        }
    }
}
I personally don't like working in PHP or placing my DB credentials in a public file, so I created a Ruby script to fix serializations that you can run locally:
https://github.com/wsizoo/wordpress-fix-serialization
Context Edit:
I approached fixing serialization by first identifying serialization via regex, and then recalculating the byte size of the contained data string.
$content_to_fix.gsub!(/s:([0-9]+):\"((.|\n)*?)\";/) {"s:#{$2.bytesize}:\"#{$2}\";"}
I then update the specified data via an escaped sql update query.
escaped_fix_content = client.escape($fixed_content)
query = client.query("UPDATE #{$table} SET #{$column} = '#{escaped_fix_content}' WHERE #{$column_identifier} LIKE '#{$column_identifier_value}'")

Request multiple S3 files into one request

Is there a way to get multiple files from S3 (or any CDN) in one request?
For example, I have four files on S3:
example.com/cdn/one.js
example.com/cdn/two.js
example.com/cdn/three.js
example.com/cdn/four.js
I would like to be able to request any combination of them at a time. I currently have to include them separately:
<script src="example.com/cdn/one.js" />
<script src="example.com/cdn/two.js" />
But I would like to include them as one request:
example.com/cdn/code.js?one&two
I've considered combining the needed combinations into single files, but there will be way too many combinations for that to be realistic. I've also considered combining all of them into one file, but that would be ridiculously large.
First of all: your question is more of a JS thing than something related to S3 / CDNs in general. And to answer it: you can't "combine" them into one HTTP request.
If you don't want to concatenate and minify them you should take a look at RequireJS, which could handle loading of your scripts – it's async and won't block the browser from rendering.
This is a really great article about async loading: css-tricks.com - Thinking Async which will give you a better insight.
EDIT after comment:
You can in fact use PHP to concatenate & minify your JS files on the fly, but this will put additional load on your server and you'll lose the benefits you get from a CDN. Another approach would be using a build system, which packs everything before it goes into production.
For further information on this topic take a look at the following links:
PHP5 - minify
PHP5 - Assetic
How do I concatenate JavaScript files into one file?
To do on-the-fly minification on your dev system (if it's a Mac), you may want to try CodeKit.
Here is one way you could do that (assuming you use one&two&three and not file=one&file=two&file=three):
function getJs() {
    var url = window.location.toString();
    var query = url.split("?")[1];
    var params = query.split("&");

    // iterate through the array of params and write the script tags
    for (var i = 0; i < params.length; i++) {
        var name = params[i] + ".js";
        document.write("<script defer language='javascript' type='text/javascript' src='" + name + "'><\/sc" + "ript>");
    }
}
This should load after the page is loaded. Then call getJs() from your HTML, perhaps. I haven't tested this, but it should be what you're looking for, I reckon.

How can I optimize the Microsoft AJAX Toolkit?

This is rather infuriating. I'm trying to optimize a very large site, and I'm at the step of reducing HTTP Requests. Microsoft is not cooperating. I have the following ScriptResources included. I'll try and grab a top-line for each to distinguish them
// Name: MicrosoftAjax.debug.js 53.5Kb
// Name: MicrosoftAjaxWebForms.debug.js 14Kb
AjaxControlToolkit.BoxSide = function() { 11.4Kb
/// Sys.UI.DomElement 958 Bytes!
// Sys.Timer 982 Bytes!
// IDropSource 6.5Kb
AjaxControlToolkit.FloatingBehavior = function(element) { 2.2Kb
AjaxControlToolkit.BehaviorBase = function(element) { 5.4Kb
AjaxControlToolkit.DynamicPopulateBehavior = function(element) { 2.9Kb
AjaxControlToolkit.BoxCorners = function() { 3.6Kb
AjaxControlToolkit.DropShadowBehavior = function(element) { 3.4Kb
AjaxControlToolkit.ModalPopupBehavior = function(element) { 5.5Kb
Come on! 12 Bloody javascript includes! Less than a KILOBYTE! Half the time to get the dang data is probably spent asking for it! ARGHHH!
Anyway, as you can see, I am annoyed. Is there some way I can roll these up, and combine them? Like into one request?
Just replace <asp:ScriptManager ... /> with <ajaxToolkit:ToolkitScriptManager ... /> in your ASPX page and you're done!
http://blogs.msdn.com/delay/archive/2007/06/11/script-combining-made-easy-overview-of-the-ajax-control-toolkit-s-toolkitscriptmanager.aspx
You can write an HttpModule to grab all the axd/js files from the response, combine them into one, and send them to the client's browser through a request to an HttpHandler.
You can take a look at Mads Kristensen's site to see what I am talking about. There are a lot of articles/workarounds for problems like yours.
Check this:
Fast ASP.NET web page loading by downloading multiple javascripts in batch
Also, one common mistake is setting <compilation debug="true"/> in some of the sites I have seen. As per Scott Gu,
When <compilation debug="false"/> is set, the WebResource.axd handler will automatically set a long cache policy on resources retrieved via it – so that the resource is only downloaded once to the client and cached there forever (it will also be cached on any intermediate proxy servers).
The question is how to minimize the overall requests to your server...
Personally, I think it's not a problem if these JS files are loaded ONCE, because they should be cached by the client. Maybe you want to investigate why this does not happen.
Secondly, there is a tag on the ScriptManager that limits your JS files... maybe you want to look into the CompositeScript tag.
Additionally, I would suggest looking into the LoadScriptsBeforeUI attribute and setting it to false; then your content gets loaded before the JavaScripts.
Use CombineScripts=true inside your ToolkitScriptManager
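For example, a one-line sketch (assumes the Toolkit's ToolkitScriptManager is registered on the page):
<ajaxToolkit:ToolkitScriptManager ID="ScriptManager1" runat="server" CombineScripts="true" />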

Preventing Flex application caching in browser (multiple modules)

I have a Flex application with multiple modules.
When I redeploy the application I was finding that modules (which are deployed as separate swf files) were being cached in the browser and the new versions weren't being loaded.
So I tried the age-old trick of adding ?version=xxx to all the modules when they are loaded. The value xxx is a global parameter which is actually stored in the host HTML page:
var moduleSection:ModuleLoaderSection;
moduleSection = new ModuleLoaderSection();
moduleSection.visible = false;
moduleSection.moduleName = moduleName + "?version=" + MySite.masterVersion;
In addition, I needed to add ?version=xxx to the main .swf that was being loaded. Since this is done by HTML, I had to do it by modifying my AC_OETags.js file as below:
function AC_FL_RunContent(){
    var ret = AC_GetArgs(arguments, ".swf?mv=" + getMasterVersion(), "movie",
        "clsid:d27cdb6e-ae6d-11cf-96b8-444553540000",
        "application/x-shockwave-flash");
    AC_Generateobj(ret.objAttrs, ret.params, ret.embedAttrs);
}
This is all fine and works great. I just have a hard time believing that Adobe doesn't already have a way to handle this. Given that Flex is being targeted to design modular applications for business I find it especially surprising.
What do other people do? I need to make sure my application reloads correctly even if someone has selected 'once per session' for their browser cache checking policy.
I had a similar problem, and ended up putting the SWF files in a sub-directory named after the build number. This meant that the URL to the SWF files pointed to a different location each time.
Ideally this should be catered for by the platform, but no joy there. But this works perfectly for us, and integrates very easily into our automated builds with Hudson - no complaints so far.
Flex says:
http://www.adobe.com/livedocs/flex/2/docs/wwhelp/wwhimpl/common/html/wwhelp.htm?context=LiveDocs_Parts&file=00001388.html
What I have done is checksum the SWF file and then add that to its url. Stays the same until the file is rebuilt/redeployed. Handled automagically by a few lines of server-side PHP script
Here is a sample:
function AC_FL_RunContent(){
    var ret = AC_GetArgs(arguments, ".swf?ts=" + getTS(), "movie",
        "clsid:d27cdb6e-ae6d-11cf-96b8-444553540000",
        "application/x-shockwave-flash");
    AC_Generateobj(ret.objAttrs, ret.params, ret.embedAttrs);
}

function getTS() {
    var ts = new Date().getTime();
    return ts;
}
AC_OETags.js is a file that exists in several places in the html-template. But as my posting said, I am facing another type of problem.
The caching is not done by Flash Player but by the browser, so it's out of Adobe's control. I think you have found a workable solution. If I want to avoid caching I usually append a random number on the URL.
