I've recently started embedding JavaScript and CSS files into our common library DLLs to make deployment and versioning a lot simpler. I was just wondering if there is any reason one might want to do the same thing with a web application, or if it's always best to just leave them as regular files in the web application, and only use embedded resources for shared components?
Would there be any advantage to embedding them?
I had to make this same decision once. The reason I chose to embed my JavaScript/CSS resources into my DLL was to prevent tampering of these files (by curious end users who've purchased my web application) once the application's deployed.
I doubt the validity of Easement's comment about how browsers download JavaScript files. I'm pretty sure that the embedded JavaScript/CSS files are recreated temporarily by ASP.NET before the page is sent to the browser so that the browser is able to download and use them. I'm curious about this and I'm going to run my own tests. I'll let you know how it goes....
Of course, anyone who knew what they were doing could use an assembly reflector and extract the JS or CSS, but that would be a heck of a lot more work than just using something like Firebug to get at this information. A regular end user is unlikely to have the desire to go to all of this trouble just to mess with the resources; anyone who's interested in this type of thing is likely to be a malicious user, not the end user. You probably have a lot of other security problems if a user is able to run a tool like a reflector against your DLL, because by that point your server's already been compromised. Security was not the factor in my decision to embed the resources.
The point was to keep users from doing something silly with these resources, like delete them thinking they aren't needed or otherwise tamper with them.
It's also a lot easier to package the application for deployment purposes because there are fewer files involved.
It's true that the DLL (class library) used by the pages is bigger, but this does not make the pages any bigger. ASP.NET generates the content that needs to be sent down to the client (the browser); no more content is sent to the client than what is needed for the page to work. I do not see how serving these resources from the class library would have any effect on the size of the data sent between the client and server.
However, Rjlopes has a point: it might be true that the browser is not able to cache embedded JavaScript/CSS resources. I'll have to check it out, but I suspect that Rjlopes is correct and the JavaScript/CSS files will have to be downloaded each time a full-page postback is made to the server. If this proves to be true, that performance hit should be a factor in your decision.
I still haven't been able to test the performance differences between using embedded resources, resx, and single files because I've been busy with my own endeavors. Hopefully I'll get to it later today, because I am very curious about this and the browser caching point Rjlopes has raised.
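For anyone curious, here is roughly how an embedded script gets registered and referenced so that ASP.NET serves it through WebResource.axd. This is only a sketch; the assembly, namespace, and resource names below are made-up examples.

    using System;
    using System.Web.UI;

    // Made-up names; the resource name must match the embedded file's
    // default namespace + folder path + file name.
    [assembly: WebResource("MyCompany.Common.Scripts.widgets.js", "text/javascript")]

    namespace MyCompany.Common
    {
        public class WidgetControl : Control
        {
            protected override void OnPreRender(EventArgs e)
            {
                base.OnPreRender(e);

                // Emits a <script src="/WebResource.axd?d=..."> tag that points at the embedded file.
                Page.ClientScript.RegisterClientScriptResource(
                    typeof(WidgetControl), "MyCompany.Common.Scripts.widgets.js");
            }
        }
    }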
Reason for embedding: Browsers don't download JavaScript files in parallel. You have a locking condition until the file is downloaded.
Reason against embedding: You may not need all of the JavaScript code. So you could be increasing the bandwidth/processing unnecessarily.
Regarding the browser cache: as far as I've noticed, the response for WebResource.axd says "304 Not Modified", so I'd guess the files are being taken from the cache.
"I had to make this same decision once. The reason I chose to embed my JavaScript/CSS resources into my DLL was to prevent tampering of these files (by curious end users who've purchased my web application) once the application's deployed."
"Reason against embedding: You may not need all of the JavaScript code. So you could be increasing the bandwidth/processing unnecessarily."
You know that if somebody wants to tamper with your JS or CSS, they just have to open the assembly with Reflector, go to the resources, and edit what they want (it probably takes a lot more work if the assemblies are signed).
If you embed the JS and CSS in the page you make the page bigger (more KB to download on each request) and the browser can't cache the JS and CSS for later requests. The good news is that you have fewer requests (at least 2 fewer if you are like me and combine multiple JS and CSS files into one), plus JavaScript files have the problem of being downloaded serially.
I am trying to prevent the user from downloading the page as a .html or .aspx file from the browser.
Or is there a way to change the content of the file if it is downloaded?
This is a complex area, with lots of moving parts. The short answer is "there is no way to do this with 100% success; there are a few things you can do which make it harder".
Firstly, you can include JavaScript to disable the right click context menu. This doesn't stop Ctrl+S, but might discourage casual attempts.
Secondly, you can use DRM in the browser (though this is primarily aimed at protecting media content). As browser support is all over the place, this isn't realistic right now.
Thirdly, you could write your site as a single page web application, and build some degree of authentication into the "retrieve content" logic. This way, saving the page to disk wouldn't bring the content along, just the "page furniture". However, any mechanism you include to only download content when you think you should is likely to be easily subverted by anyone who is moderately motivated.
Also, any steps you take to stop people persisting your pages locally are likely to break the caching mechanisms on which the internet depends for performance, so your site would likely be dramatically slower.
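On the first point, the context-menu script is a one-liner; in ASP.NET it could be injected from the code-behind, roughly like this. It's just a sketch (page and key names are made up), and again it only deters casual users.

    using System;
    using System.Web.UI;

    public partial class ProtectedPage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Suppresses the right-click context menu; Ctrl+S and View Source still work.
            const string script =
                "document.addEventListener('contextmenu', function (e) { e.preventDefault(); });";
            ClientScript.RegisterStartupScript(GetType(), "noContextMenu", script, true);
        }
    }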
No you can't stop them.
Consider how the web actually works here: once the user has visited your website and loaded your page into their browser, they have already downloaded it - the web page was transmitted from your server to their computer and appeared on their screen.
All they have to do then is click the Save button to keep it permanently on their disk. That doesn't involve downloading it again, it just copies the page data from a temporary folder to a permanent one. Of course it's also possible for people to use another HTTP client (i.e. not a browser, but maybe an existing program, or some code they wrote themselves) to visit the URL of your page and save the returned contents.
It's not clear what problem you think you would solve by stopping people from saving pages. Saving the page is something done within the browser; you as a site developer don't control the user's browser, so you can't prevent that. And if you stop them from downloading your page in the first place then, by definition, you also stop them from using your website... which kind of defeats the point of having one :-).
If you've got some sort of worry about security, you'll have to clarify exactly what you are concerned about, and maybe you can get advice about a sensible way to deal with it.
We have an ASP.NET MVC 5 application. We use bundles with optimization enabled by default, but we have heard several times from users that they get errors which we think are caused by old versions of our scripts. Their browsers somehow take scripts from the cache, despite the fact that we have edited those script files and the bundles should be updated. The worst part of the problem is that we can't reproduce it; we don't know how. We have already tried making test changes to the scripts, like adding some "console.log('test')" lines, to see whether the browser takes the cached version, but everything was fine: the hash at the end of <script src="....?v='hash'"> changed and the browser took the newest version on the first try. I should mention that our site is a single page application; I don't know, maybe that's somehow related to the problem.
Have you faced this kind of problem?
There's not enough information here to give a definitive answer. The bundler detects changes in files and will regenerate the bundle along with the link to that bundle, which will include an updated query string param. Since the query string is part of the URI, it's considered a totally different resource at this point, and the browser should fetch it again, because there is technically no cache available. The only logical reason this would not occur is if the HTML with the link to the bundle is not being updated. This can happen if you're using OutputCache or otherwise caching the HTML document. It can also happen if the client's browser is aggressively caching the HTML document. Unfortunately, there's not much you can do about that, as the client browser ultimately has control over what is or is not cached and for how long.
That said, given that this is a single page app, it's very possible that it's also including a cache manifest. This manifest will very often include the HTML file itself, and the browser will not refetch any file in the manifest unless the manifest itself is updated.
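For context, a typical bundle registration looks like the sketch below (the bundle and file names are placeholders). The key point is that the ?v= hash in the rendered script tag only changes when the HTML containing it is regenerated, so an HTML document served from OutputCache, the browser cache, or an app-cache manifest keeps pointing at the old bundle URL.

    using System.Web.Optimization;

    public class BundleConfig
    {
        // Placeholder bundle and file names.
        public static void RegisterBundles(BundleCollection bundles)
        {
            bundles.Add(new ScriptBundle("~/bundles/app").Include(
                "~/Scripts/app/module1.js",
                "~/Scripts/app/module2.js"));

            // With optimizations on, Scripts.Render("~/bundles/app") emits
            // <script src="/bundles/app?v=HASH">, where HASH changes whenever an included file changes.
            BundleTable.EnableOptimizations = true;
        }
    }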
I am building an AJAX-intensive web application (using ASP.NET, jQuery, and WCF web services) and am looking into building an HTTP handler that handles script combining and compression for my JavaScript files and my CSS files. I know not combining the scripts is generally a less preferred approach, and I'm sure it's probably the way I will end up going, but my question is this...
Since so many of my JS files come from the controls I use, don't they get cached by the browser after the first load anyway? Since so many of these controls appear on many of the pages of my web application, is it actually faster to combine all of my scripts and serve that one file (which will vary for every page), or to serve the individual files, which will get cached? I guess what I'm getting at is: by enabling script combining, am I now losing part of the caching ability of the browser? I know I can cache the combined script, but the combined script will be different for every page, whereas with the individual control scripts each one will be cached and the number of new scripts will be minimal for each page call.
Does this make any sense? Thoughts?
The fewer JS files you serve, the faster your pages will be, due to fewer round trips to the server. I would manually put all the common JS code into one file (or as few files as possible), all the CSS code into one file, etc., and not worry about using a handler to combine the files. The handler is going to take processing time to combine the files, so you pay that penalty as well. You can turn on gzip compression in IIS and have it handle that for you. I would run something like YUI Compressor on the JavaScript files used in production.
If the handler changes the file contents from page to page, browsers won't be able to cache it. If you are using SSL this point will be moot though as the browser won't cache the files anyway.
EDIT
I've been corrected: some browsers (like Firefox) can cache SSL content, but not all of them.
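That said, if you do go the handler route, it can at least emit proper caching headers so the combined file is cacheable. Here's a rough sketch; the handler name and script list are made up, and the actual minification logic is omitted.

    using System;
    using System.Web;

    // Made-up handler and file names; minification omitted for brevity.
    public class CombinedScriptHandler : IHttpHandler
    {
        private static readonly string[] Scripts = { "~/Scripts/a.js", "~/Scripts/b.js" };

        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/javascript";

            foreach (var virtualPath in Scripts)
            {
                var physicalPath = context.Server.MapPath(virtualPath);
                context.Response.AddFileDependency(physicalPath);
                context.Response.WriteFile(physicalPath);
                context.Response.Write(";\n");
            }

            // Far-future Expires plus an ETag derived from the files' timestamps,
            // so browsers can cache the combined response and revalidate it cheaply.
            context.Response.Cache.SetCacheability(HttpCacheability.Public);
            context.Response.Cache.SetExpires(DateTime.UtcNow.AddYears(1));
            context.Response.Cache.SetETagFromFileDependencies();
        }
    }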
As others mentioned: minify, gzip, and turn on caching (set an expires time and make sure you support ETags) for the one static JS file and the one static CSS file you have. On top of this, it's recommended to load your CSS files as early as possible and your JS files as late as possible (JS file loading blocks other downloads, and the browser can render the page faster if it gets the CSS as soon as possible). Sprites also help if you have many small images/icons. Loading static content from subdomains helps the browser make more simultaneous downloads, and you could drop all your cookies for those subdomains to lower the HTTP header size.
You could consult YSlow for performance analysis; it's a great tool!
Regarding "generally a less preferred": is that for live JS? I'm thinking of JavaScriptMVC, which compresses all the code into one file when compiled for production, not development... It's heavyweight, I believe.
Usually it's better to combine all the scripts; that way you reduce HTTP overhead. Minified control scripts are usually quite small. In the rare case where you are using a quite large control, you could leave it out of the main JS file.
An architect at my work recently read Yahoo!'s Exceptional Performance Best Practices guide, where it says to use a far-future Expires header for resources used by a page such as JavaScript, CSS, and images. The idea is that you set an Expires header for these resources years into the future so they're always cached by the browser, and whenever you change a file and therefore need the browser to request it again instead of using its cache, you change the filename by adding a version number.
Instead of incorporating this into our build process though, he has another idea. Instead of changing file names in source and on the server disk for each build (granted, that would be tedious), we're going to fake it. His plan is to set far-future expires on said resources, then implement two HttpModules.
One module will intercept all the Response streams of our ASPX and HTML pages before they go out, look for resource links and tack on a version parameter that is the file's last modified date. The other HttpModule will handle all requests for resources and simply ignore the version portion of the address. That way, the browser always requests a new resource file each time it has changed on disk, without ever actually having to change the name of the file on disk.
Make sense?
My concern relates to the module that rewrites the ASPX/HTML page Response stream. He's simply going to apply a bunch of Regex.Replace() calls on the "src" attributes of <script> and <img> tags, and the "href" attribute of <link> tags. This is going to happen for every single request on the server whose content type is "text/html." Potentially hundreds or thousands a minute.
I understand that HttpModules are hooked into the IIS pipeline, but this has got to add a prohibitive delay in the time it takes IIS to send out HTTP responses. No? What do you think?
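For reference, here's roughly how I picture the rewriting step. This is my own sketch of the idea, not his actual code, and the names are made up; a response filter (a Stream wrapper installed via HttpResponse.Filter) would buffer the outgoing HTML and run it through something like this.

    using System.IO;
    using System.Text.RegularExpressions;
    using System.Web;

    // My own sketch of the idea, not his actual code.
    public static class ResourceVersioner
    {
        private static readonly Regex ResourceUrl = new Regex(
            "(?<attr>\\b(?:src|href))=\"(?<path>[^\"?:]+\\.(?:js|css|png|gif|jpg))\"",
            RegexOptions.IgnoreCase | RegexOptions.Compiled);

        public static string AppendVersions(string html, HttpContext context)
        {
            return ResourceUrl.Replace(html, match =>
            {
                var path = match.Groups["path"].Value;
                if (path.StartsWith("//"))
                    return match.Value; // skip protocol-relative external URLs

                var file = context.Server.MapPath(path);
                if (!File.Exists(file))
                    return match.Value; // leave unknown resources untouched

                // Version = last modified time, so the URL changes whenever the file changes.
                var version = File.GetLastWriteTimeUtc(file).Ticks;
                return string.Format("{0}=\"{1}?v={2}\"", match.Groups["attr"].Value, path, version);
            });
        }
    }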
A few things to be aware of:
If the idea is to add a query string to the static file names to indicate their version, unfortunately that will also prevent caching by the kernel-mode HTTP driver (http.sys)
Scanning each entire response based on a bunch of regular expressions will be slow, slow, slow. It's also likely to be unreliable, with hard-to-predict corner cases.
A few alternatives:
Use control adapters to explicitly replace certain URLs or paths with the current version (a rough sketch follows this list). That allows you to focus specifically on images, CSS, etc.
Change folder names instead of file names when you version static files
Consider using ASP.NET skins to help centralize file names. That will help simplify maintenance.
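To illustrate the first alternative, an adapter for the Image control might look roughly like this. It's only a sketch (registered via an App_Browsers/*.browser file), and for brevity it appends a ?v= query string, which per the first caveat above defeats http.sys kernel caching; a folder-based version avoids that.

    using System;
    using System.IO;
    using System.Web.UI.WebControls;
    using System.Web.UI.WebControls.Adapters;

    // Sketch only: appends the file's last-write timestamp as a version token.
    public class VersionedImageAdapter : WebControlAdapter
    {
        protected override void OnPreRender(EventArgs e)
        {
            base.OnPreRender(e);

            var image = (Image)Control;
            var url = image.ImageUrl;
            if (string.IsNullOrEmpty(url) || url.Contains("://") || url.Contains("?"))
                return; // skip external URLs and URLs that already carry a query string

            var file = Page.Server.MapPath(url);
            if (File.Exists(file))
                image.ImageUrl = url + "?v=" + File.GetLastWriteTimeUtc(file).Ticks;
        }
    }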
In case it's helpful, I cover this subject in my book (Ultra-Fast ASP.NET), including code examples.
He's worried about stuff not being cached on the client. Obviously this depends somewhat on how the user population has their browsers configured; if it's the default config, then I doubt you need to worry about trying to second-guess the client caching: it's too hard, the results aren't guaranteed, and it's not going to help new users anyway.
As far as the HTTP modules go, in principle I would say they are fine, but you'll want them to be blindingly fast and efficient if you take that track; it's probably worth trying out. I can't speak to the appropriateness of using regex to do what you want inside them, though.
If you're looking for high performance, I suggest you (or your architect) do some reading (and I don't mean that in a nasty way). I learnt something recently which I think will help; let me explain (and maybe you guys know this already).
Browsers only hold a limited number of simultaneous connections open to a specific hostname at any one time; e.g., IE6 will only open two connections to, say, www.foo.net.
If you serve your images from, say, images.foo.net, you get another set of connections straight away.
The idea is to separate the different content types onto different hostnames (css.foo.net, scripts.foo.net, ajaxcalls.foo.net); that way you'll make sure the browser is really working on your behalf.
http://code.google.com/p/talifun-web
StaticFileHandler - Serve static files in a cacheable, resumable way.
CrusherModule - Serve compressed, versioned JS and CSS in a cacheable way.
You don't quite get kernel-mode caching speed, but serving from HttpRuntime.Cache has its advantages: kernel-mode cache can't cache partial responses, and you don't get fine-grained control of the cache. The most important thing to implement is a consistent ETag header and Expires header; this will improve your site performance more than anything else.
Reducing the number of files served is probably one of the best ways to improve the speed of your website. The CrusherModule combines all the css on your site into one file and all the js into another file.
Memory is cheap, hard drives are slow, so use it!
What are some of the disadvantages of using an external JS file over including the JS as a part of the ASPX page?
I need to make an architectural decision and heard from coworkers that external JS does not play nice sometimes.
The only downside that I am aware of is the extra HTTP request needed. That downside goes away as soon as the Javascript is used by two pages or the page is reloaded by the same user.
One con is that the browser can't cache the JS if it's in the page. If you reference it externally the browser will cache that file and not re-download it every time you hit a page. With it embedded it'll just add to the file-size of every page.
Also maintainability is something to keep in mind. If it's common JS it'll be a bit more of a pain to make a change when you need to update X number of HTML files' script blocks instead of one JS file.
Personally I've never run into an issue with external files vs embedded. The only time I have JS in the HTML itself is when I have something to bind on document load specifically for that page.
Caching is both a pro and potentially a con, if you are not handling it properly.
The pro is obvious, as it will improve page loading on every page load past the first one.
The con is that when you release new code, it may still be cached by the user's browser, so they may not get the update. This can easily be solved by changing the name of your JS file. We automatically version our JS with the file's timestamp, and then make sure requests point to the correct file through configuration on our web server (mod_rewrite on Apache).
Ask them to define "play nice". Aside from better logical organization, external js files don't have to be transmitted when already cached.
We use YUI compressor to automatically minify and combine external scripts into one when doing production/staging builds.
The only disadvantage I know of is that another request must be made to the server in order to retrieve the external JS file. As was said before me, you can use tools like the YUI Compressor to minimize the effects of this.
The advantage however would be that you can keep all of your JS code in a separate more maintainable format.
Another huge advantage of external JavaScript is the ability to check your syntax with JSLint. That, added to the ability to minify, combine, and cache external scripts, makes internal JavaScript seem like a poor choice.