We have an ASP.NET MVC 5 application. We use bundles with optimization enabled by default, but we have heard several times from users that they get errors which we think are caused by old versions of our scripts: their browsers somehow take scripts from the cache, despite the fact that we have edited those script files and the bundles should have been updated. The worst part of the problem is that we can't reproduce it; we don't know how. We have already tried making test changes to scripts, like adding a console.log('test') line, to see whether the browser would take the cached version, but everything was fine: the hash at the end of <script src="...?v=hash"> changed and the browser took the newest version on the first try. I should mention that our site is a single page application; maybe that's somehow related to the problem.
Have you faced this kind of problem?
There's not enough information here to give a definitive answer. The bundler detects changes in files and will regenerate the bundle along with the link to that bundle, which will include an updated query string param. Since the query string is part of the URI, it's considered a totally different resource at this point, and the browser should fetch it again, because there is technically no cache available. The only logical reason this would not occur is if the HTML with the link to the bundle is not being updated. This can happen if you're using OutputCache or otherwise caching the HTML document. It can also happen if the client's browser is aggressively caching the HTML document. Unfortunately, there's not much you can do about that, as the client browser ultimately has control over what is or is not cached and for how long.
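If OutputCache is involved, the usual fix is to stop caching the HTML shell that carries the bundle links. A minimal sketch, assuming an MVC controller serves the SPA shell (the controller name is made up):

    using System.Web.Mvc;
    using System.Web.UI;

    public class ShellController : Controller
    {
        // A cached action like this can keep serving HTML that points at
        // stale bundle URLs for up to an hour:
        // [OutputCache(Duration = 3600)]

        // For an SPA shell it is usually safer not to cache the HTML at all:
        [OutputCache(NoStore = true, Duration = 0, Location = OutputCacheLocation.None)]
        public ActionResult Index()
        {
            return View();
        }
    }

This emits no-cache/no-store headers, so the browser has to revalidate the HTML and will always see the current bundle hash.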
That said, given that this is a single page app, it's very possible that it's also including a cache manifest. This manifest will very often include the HTML file itself, and the browser will not refetch any file in the manifest unless the manifest itself is updated.
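For reference, an HTML5 application cache manifest looks roughly like this (file names invented for illustration); the browser will keep serving every listed file, including the HTML, from its local copy until the manifest file itself changes, which is typically forced by bumping a version comment:

    CACHE MANIFEST
    # version 42 -- must change on every release
    index.html
    Scripts/app.js
    Content/site.css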
Related
I am working on a .NET (C#) application. It's quite an old application, with classic ASP pages, Web Forms, and MVC pages, and it uses the single page application concept to load all of them. Each of them has different CSS and JS files depending on its business logic.
Now the problem is that after every release we face the same issue: CSS and JS caching. I know there are a couple of ways to deal with this; the most common is adding a version and changing it with every release. But we have thousands of such links, so updating all of them doesn't seem like a workable solution.
Another approach we thought of is a module that intercepts all resource requests and updates the link for each CSS and JS file, adding a version (as mentioned in the paragraph above). But the problem I can see here is that this will slow the application down, because string (file path) manipulation code has to run for every resource request.
I am sure I am not the only one facing this problem, so I'd appreciate it if anyone could share their experience and approach for handling it with minimal code changes.
As @tim suggested, adding a query string to the files will help you.
Considering the volume of changes required on your end, I think preparing a rewrite rule for .js and .css files (with pattern matching) and injecting the query string into the target URL (with a permanent redirect) will help you.
A solution for a similar query is available here.
Please note that with each build you might need to tweak the configuration as necessary.
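As a rough illustration of the same idea in code rather than IIS configuration, here is a minimal IHttpModule sketch (module name and version token are invented; note that a permanent redirect can itself be cached by the browser, so a temporary redirect is used here instead):

    using System;
    using System.Web;

    public class VersionRedirectModule : IHttpModule
    {
        // Assumption: this token is bumped with every release.
        private const string Version = "2024.1.0";

        public void Init(HttpApplication app)
        {
            app.BeginRequest += (sender, e) =>
            {
                var ctx = ((HttpApplication)sender).Context;
                var path = ctx.Request.Path;

                // Redirect bare .js/.css requests to a versioned URL.
                if ((path.EndsWith(".js", StringComparison.OrdinalIgnoreCase) ||
                     path.EndsWith(".css", StringComparison.OrdinalIgnoreCase)) &&
                    ctx.Request.QueryString["v"] == null)
                {
                    ctx.Response.Redirect(path + "?v=" + Version, true);
                }
            };
        }

        public void Dispose() { }
    }

The module still has to be registered in web.config, and each redirect costs an extra round trip, which is the performance trade-off the questioner is worried about.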
What might help: it isn't necessary to include a version number in your file names. It is sufficient if a version number exists in your link.
You could use a (fake) query string for that. So instead of linking to your CSS or JS file as "style.css", you could link to "style.css?version=1.02". The query string part is ignored, but the browser thinks it's a new file.
Maybe you can go through all of your links once, make them server-side (in the case of Web Forms), and add this version part server-side based on the version of your build. Then for your next releases the problem is solved.
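A minimal sketch of such a server-side helper, assuming the build bumps the assembly version on every release (class and method names are made up):

    using System.Web;

    public static class StaticUrl
    {
        // Assumption: the assembly version changes with every release.
        private static readonly string Version =
            typeof(StaticUrl).Assembly.GetName().Version.ToString();

        public static string Versioned(string path)
        {
            var separator = path.Contains("?") ? "&" : "?";
            return path + separator + "v=" + HttpUtility.UrlEncode(Version);
        }
    }

In a Web Forms page it could then be used as <link rel="stylesheet" href="<%= StaticUrl.Versioned("/css/style.css") %>" />, so every release automatically produces new URLs.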
Hey, I'm working on a site that loads CSS and images that are generated server-side. Sometimes what is loaded shows up as the incorrect template, but with the correct images.
Since this template is created on the server and not on the actual page, I was thinking that the web server hosting the actual page may have a cached version of it and may sometimes ignore the CSS and images generated by the main server.
In short:
Do web servers sometimes keep cached versions of page styling?
Is there an easy way to make them always serve the live version?
Also, this happens very infrequently and at random; it seems very hard to replicate, but I have seen it happen a few times.
Any other ideas?
To the first question: yes, they do, but only if they are set up that way, for example with a CDN or Varnish. These systems are used for websites under heavy load, where content must be cached locally or on other servers, so the user sees the cached content rather than what the web server generates at the moment of the request.
You can probably rule this out in your case then ;)
I always use Chrome, or Firebug on Firefox, to debug a website.
Press F12 while on the page you want to check and, in Chrome, go to the "Network" tab and tick "Disable cache".
This is incredibly handy if you refresh your page often and don't want the content cached.
As for the question itself, I don't think we can help you without seeing the code. But try the F12 suggestion above first.
There are two kinds of caches to think about. One is the server cache. If you use one, then whenever the CSS is modified you need to empty the cached CSS, if you can invalidate it selectively; if not, you need to empty the whole cache, which might be painful.
As for the browser cache: if you add a new parameter to your CSS file's URL, the file will be loaded even if it was cached in a given user's browser, so it is recommended to add a parameter to your CSS file where you include it. This parameter should be a version, a timestamp, or something else uniquely distinguishable from earlier versions. That value should be stored, and you should refresh it (preferably automatically) whenever the CSS changes. The exact steps are up to you, since they differ greatly between environments.
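One way to automate that refresh is to derive the parameter from the file's last write time, so it changes exactly when the file does. A minimal sketch, assuming the CSS lives on the local disk (the helper name is invented):

    using System.IO;
    using System.Web;

    public static class CssVersion
    {
        // Returns e.g. "/css/style.css?v=638412345678901234".
        public static string Stamped(string virtualPath)
        {
            var physicalPath = HttpContext.Current.Server.MapPath(virtualPath);
            var stamp = File.GetLastWriteTimeUtc(physicalPath).Ticks;
            return virtualPath + "?v=" + stamp;
        }
    }

In production you would want to cache the ticks value per file rather than hitting the disk on every request.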
For a long time I've been updating ASP.NET pages on the server and have never found the correct way to make changes visible in files like CSS and images.
I know that if I append something to the URL the browser will think the file is a different one:
<img src="/images/myLogo.png?v=1"/>
or perhaps changing its name:
<img src="/images/myLogo.v1.png"/>
Unfortunately that does not look like the correct way. In a case where I'm using App_Themes, the files in that folder are automatically injected into the page in a way that doesn't let me easily change the URL.
So my question is:
When I'm publishing the ASP.NET application on the server, what is the correct way to signal to IIS (so that it notifies the browser afterwards) that a file has changed? Isn't it automatic? Should I change some configuration in IIS, or perhaps add some "decoration" in the code?
I've already tried many questions here on SO, like "ASP.NET - Invalidate browser cache", "How to refresh the browser cache of an image?", "Handle cached images? How to get the browser to show the new version?", and even "What is an elegant way to force browsers to reload cached CSS/JS files?", but none of them take any approach other than handling it manually in code instead of through IIS or ASP.NET configuration.
The closest I could find is "Asking browsers to cache our images (ASP.NET/IIS)", where they set an expiration, but not based on whether the files were updated; instead they cached those files for a number of days or hours, so the files would be re-downloaded even when no changes had been made.
I want to know whether IIS or ASP.NET offers something related to this, automatically telling the browser that a file has changed. Is it possible or built in?
The options you have for updating a browser-side cached item are:
1. Change the file name.
2. Add a URL parameter.
3. Cache it for a limited time (e.g. a couple of hours).
4. Compare the date-time of creation (Last-Modified).
5. Signal with an ETag.
With the first two you avoid one server call per item, while the third option loads the item again after some time.
With the other two you have to make one call to the server per item to see whether it needs to be loaded again.
So you cannot have it all here; there is no single correct way, and you need to choose what is best for you and what you can do. The fastest from the client's perspective are options (1) and (2).
The direct answer to your question is to use an ETag, or a date-time comparison of the file's creation, but that way you still spend a call to the server; all you save is the size of what travels back.
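A minimal sketch of the ETag approach as an ASP.NET HTTP handler, assuming the file's last write time is an acceptable validator (the handler name is made up, and the web.config registration is omitted):

    using System;
    using System.IO;
    using System.Web;

    public class VersionedFileHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            var path = context.Server.MapPath(context.Request.Path);

            // Derive a validator from the file's last write time.
            var etag = "\"" + File.GetLastWriteTimeUtc(path).Ticks.ToString("x") + "\"";

            // If the browser already has this version, answer 304 with no body.
            if (context.Request.Headers["If-None-Match"] == etag)
            {
                context.Response.StatusCode = 304;
                return;
            }

            context.Response.Cache.SetCacheability(HttpCacheability.Public);
            context.Response.Cache.SetETag(etag);
            context.Response.WriteFile(path);
        }
    }

The conditional request still costs a round trip, but a 304 carries no body, which is exactly the trade-off described above.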
Some more links:
HTTP ETag
How do I support ETags in ASP.NET MVC?
Configuring ETags with Http module in asp.net
How to control web page caching, across all browsers?
Jquery getScript caching
and you can find even more.
I do not really understand how Google Code handles file versioning.
I am building a jQuery plugin that anyone can access. Like so:
<script type="text/javascript" src="http://jquery-old-browser-warning.googlecode.com/files/jquery.browser-warning.js"></script>
This script accesses other files in the same project (via Ajax).
The problem is that when I upload a new file, it seems as if nothing has changed. Google recommends that new files should have new names.
But then I would have to change the file names that the script loads.
But then I would have to change the script file as well, and that would break everybody's implementation (with the script tag above).
Is there a way to force a file to change when uploading with the same filename?
PS: If I go directly to the project page's file list, then I do get the file with the updated content. But, as I said, not when fetching it through Ajax.
The cheapest trick in the book to prevent caching is adding some random content to a GET parameter:
www.example.com/resources/resource.js?random=1234567
You can for example use the current timestamp for this.
This, however, causes any and every access to re-fetch the content, and invalidates any client-side caching mechanism as well. I would use this only as a last resort. If Google are that stringent about caching, I'd rather develop a workflow that allows for easy renaming of files.
I don't know your workflow, but maybe you can work with versioned directories?
Like so:
www.example.com/50/resources/resource.js
www.example.com/51/resources/resource.js
That would keep whatever caching the client employs intact, but whenever there's a change on your end, the browser would reload the content.
I think it's just a cache in the browsers, so when you request the file via Ajax, just add a random parameter or a version number.
For example, Stack Overflow adds a version parameter to static content:
http://sstatic.net/so/all.css?v=6638
Are you talking about uploading files to the "Downloads" area? Those should have distinct file names; for example, they should be versioned. If you're uploading the script code itself, that should be committed through the version control system you're using, and should most definitely keep the same name across revisions.
Edit: your code snippet didn't show up on my page, so I misunderstood what you were trying to do. I don't imagine Google would be happy with you referencing the SVN repository every time some client page is loaded :)
I've recently started embedding JavaScript and CSS files into our common library DLLs to make deployment and versioning a lot simpler. I was just wondering if there is any reason one might want to do the same thing with a web application, or if it's always best to just leave them as regular files in the web application, and only use embedded resources for shared components?
Would there be any advantage to embedding them?
I had to make this same decision once. The reason I chose to embed my JavaScript/CSS resources into my DLL was to prevent tampering of these files (by curious end users who've purchased my web application) once the application's deployed.
I doubt and question the validity of Easement's comment about how browsers download JavaScript files. I'm pretty sure that the embedded JavaScript/CSS files are recreated temporarily by ASP.NET before the page is sent to the browser, in order for the browser to be able to download and use them. I'm curious about this and I'm going to run my own tests. I'll let you know how it goes...
-Frinny
Of course, anyone who knew what they were doing could use an assembly reflector and extract the JS or CSS. But that would be a heck of a lot more work than just using something like Firebug to get at this information. A regular end user is unlikely to go to all of that trouble just to mess with the resources; anyone interested in this type of thing is likely to be a malicious user, not an end user. And you probably have plenty of other security problems if someone is able to run a tool like an assembly reflector against your DLL, because by that point your server has already been compromised. Security was not the factor in my decision to embed the resources.
The point was to keep users from doing something silly with these resources, like deleting them thinking they aren't needed, or otherwise tampering with them.
It's also a lot easier to package the application for deployment because there are fewer files involved.
It's true that the DLL (class library) used by the pages is bigger, but this does not make the pages any bigger. ASP.NET generates the content that needs to be sent down to the client (the browser), and no more content is sent than what the page needs to work. I do not see how the class library helping to serve these pages would have any effect on the size of the data travelling between client and server.
However, Rjlopes has a point: it might be true that the browser is not able to cache embedded JavaScript/CSS resources. I'll have to check it out, but I suspect Rjlopes is correct and the JavaScript/CSS files would have to be downloaded on each full-page postback to the server. If this proves true, that performance hit should be a factor in your decision.
I still haven't been able to test the performance differences between using embedded resources, .resx files, and separate files, because I've been busy with my own endeavors. Hopefully I'll get to it later today, because I am very curious about this and about the browser-caching point Rjlopes has raised.
Reason for embedding: browsers don't download JavaScript files in parallel, so you have a blocking condition until the file is downloaded.
Reason against embedding: You may not need all of the JavaScript code. So you could be increasing the bandwidth/processing unnecessarily.
Regarding the browser cache: as far as I've noticed, the response for WebResource.axd says "304 Not Modified", so I guess the files are taken from the cache.
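For context, this is roughly how a resource ends up behind WebResource.axd in the first place (the namespace, type, and file name here are invented):

    // In the class library, usually in AssemblyInfo.cs; the resource name is
    // the default namespace plus the folder path plus the file name.
    [assembly: System.Web.UI.WebResource("MyLib.Scripts.widget.js", "text/javascript")]

    // In a page or control, ask ASP.NET for the WebResource.axd URL:
    string url = Page.ClientScript.GetWebResourceUrl(
        typeof(MyControl), "MyLib.Scripts.widget.js");
    Page.ClientScript.RegisterClientScriptInclude("widget", url);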
Quoting the answers above:
"I had to make this same decision once. The reason I chose to embed my JavaScript/CSS resources into my DLL was to prevent tampering of these files (by curious end users who've purchased my web application) once the application's deployed."
"Reason against embedding: You may not need all of the JavaScript code. So you could be increasing the bandwidth/processing unnecessarily."
You know that if somebody wants to tamper with your JS or CSS, they just have to open the assembly with Reflector, go to the resources, and edit what they want (it probably takes a lot more work if the assemblies are signed).
If you embed the JS and CSS in the page, you make the page bigger (more KB to download on each request) and the browser can't cache the JS and CSS for subsequent requests. The good news is that you make fewer requests (at least two fewer if, like me, you combine your JS files into one and your CSS files into one); on the other hand, JavaScript files have the problem of being downloaded serially.