We have several fairly large JavaScript files embedded into a single script-resources DLL. This DLL is then consumed by multiple projects via an assembly reference and page includes through the ASP.NET ScriptManager. This keeps things nice and neat within our ASP.NET pages and requires very little work to integrate into new projects.
The problem is that some of these script files are quite large (approx. 100 KB) and take time to download. Running a minifier on them before embedding reduces this a lot (to around 70 KB), but not enough. What we would like to do is gzip the files before they are embedded. However, just gzipping the files causes syntax errors on the client, because the content is never unzipped. A content type of "text/javascript" is applied in AssemblyInfo when the resource is embedded, but we can't find a way to specify a content encoding.
Is there any way to make this work without having to write an HttpModule/handler (which would mean changing the config of every consuming project)?
Okay, so it looks, after many different attempts, an absence of answers, and a lot of Google searching, like the HttpModule is the only way to approach this. In an attempt to keep this easy to configure, I've set up an HttpModule inside the same DLL that contains the script files, as below.
Simplified DLL Structure
\ScriptMinified\*.js [Embedded Resource] (Minified Only)
\ScriptCompressed\*.gz [Embedded Resource] (Gzipped and Minified)
\ScriptDebug\*.js [Embedded Resource] (Raw uncompressed and commented)
MyScriptManager.cs
MyHttpModule.cs
The only additional work is an entry in the consumer's web.config to enable the module. Plus, I've made the initialisation call in MyScriptManager, which emits the script tags, detect the presence/mode of the new HttpModule and serve the gzipped, debug, or minified version as required. This means we don't have to recode or reconfigure any old projects for them to work, so this achieves much the same result. A rough sketch of the module is below.
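For illustration, here is a minimal sketch of what such a module can look like. The assembly name, the scripts.axd URL convention, and the resource naming are my assumptions, not the actual implementation:

using System;
using System.IO;
using System.Reflection;
using System.Web;

public class MyHttpModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += OnBeginRequest;
    }

    public void Dispose() { }

    private static void OnBeginRequest(object sender, EventArgs e)
    {
        HttpContext context = ((HttpApplication)sender).Context;

        // Hypothetical URL convention emitted by MyScriptManager, e.g. scripts.axd?name=foo
        if (context.Request.RawUrl.IndexOf("scripts.axd", StringComparison.OrdinalIgnoreCase) < 0)
            return;
        string name = context.Request.QueryString["name"];
        if (String.IsNullOrEmpty(name))
            return;

        // Serve the pre-gzipped resource only when the browser advertises gzip support.
        string acceptEncoding = context.Request.Headers["Accept-Encoding"] ?? String.Empty;
        bool gzip = acceptEncoding.Contains("gzip");

        string resourceName = gzip
            ? "MyScripts.ScriptCompressed." + name + ".gz"
            : "MyScripts.ScriptMinified." + name + ".js";

        using (Stream source = Assembly.GetExecutingAssembly().GetManifestResourceStream(resourceName))
        {
            if (source == null)
                return;

            context.Response.ContentType = "text/javascript";
            if (gzip)
                context.Response.AppendHeader("Content-Encoding", "gzip");

            byte[] buffer = new byte[4096];
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                context.Response.OutputStream.Write(buffer, 0, read);
        }

        // The response is complete; skip the rest of the pipeline.
        context.ApplicationInstance.CompleteRequest();
    }
}

The key point is the Content-Encoding: gzip response header, which is exactly what an embedded resource served the normal way can't be given, and which tells the browser to unzip the content before executing it.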
Related
Recently I made changes to an application which had 20 different .js files referenced, using bundling to bring this down to one file.
This includes:
Frameworks (jQuery library)
Custom logic (global logic for header navigation etc)
However, after deployment this showed that we were no longer making use of asynchronous downloading of JavaScript files.
Therefore our waterfall effect was replaced by one long download.
Has anyone run into this problem before? Are there any guidelines for bundling which suggest multiple bundles over a single one?
It depends on your application. Unless it's a single-page application, I would prefer to separate the files into multiple .js files based on functionality. If you have a multi-page application, instead of forcing the client to download one bulky file, separate the scripts based on usage. Bundle together only those which are used by most pages; otherwise keep them separate based on functionality.
I'm using ASP.NET 3.5 and looking for a way to bundle a bunch of my scripts. I came across ScriptManager's CompositeScript element. Is it a good solution to use for bundling? Does it have any ramifications, etc.?
The pros, cons, and traps are similar to those of other script-bundling solutions: you will want to minify first, pay attention to the order of the scripts, and start each file with a ; to close off any unclosed statement left dangling by the previous file in the bundle.
One ASP.NET-specific issue is the debug/development experience. If you combine your scripts, it is much more difficult to find your code in the IE debugger: the script will have a machine-generated name that looks similar to the other framework-generated scripts, and your code will be buried in a much larger file.
So I register my references in code-behind and wrap them in #if DEBUG/#endif and #if RELEASE/#endif (be sure to define a RELEASE compilation symbol in the project properties; it isn't there by default if you use this trick). The RELEASE version bundles all the scripts; the DEBUG version leaves the files separate. See the sketch below.
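As a rough illustration of that pattern (the page name and script paths are placeholders, not from the original project; it assumes an asp:ScriptManager is present in the markup):

using System;
using System.Web.UI;

public partial class MyPage : Page
{
    protected void Page_Init(object sender, EventArgs e)
    {
        ScriptManager sm = ScriptManager.GetCurrent(this);
#if DEBUG
        // Debug build: register each file separately so it is easy to locate in the debugger.
        sm.Scripts.Add(new ScriptReference("~/Scripts/header.js"));
        sm.Scripts.Add(new ScriptReference("~/Scripts/validation.js"));
#endif
#if RELEASE
        // Release build: let the ScriptManager combine everything into one request.
        sm.CompositeScript.Scripts.Add(new ScriptReference("~/Scripts/header.js"));
        sm.CompositeScript.Scripts.Add(new ScriptReference("~/Scripts/validation.js"));
#endif
    }
}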
Also, per Microsoft's recommendation, script bundling works best for files that you need throughout the website. If you have a multi-page site with pages A, B, and C and your users normally visit only one of them, then bundling the files for A, B, and C makes the user download two pages' worth of scripts they don't need. I think this is a bad micro-optimization because most apps have small JavaScript files and large libraries, so a website's worth of bundled JS is not enough bytes to worry about, unless you have a lot of traffic.
Finally, the server-side ScriptManager doesn't offer any way to defer scripts or dynamically trigger a load from the client side (other than loading scripts after the UI). I use LAB.js to dynamically load scripts later; this sometimes allows you to defer a script until you know you need it, and possibly defer loading it forever. Once you bundle that script, it will be loaded for every user whether they turn out to need it or not.
Part 2
Another gotcha, at least for me, is that while you can enable caching of JS files in web.config (no time to look up the syntax at the moment!) and you can also enable caching at the IIS level using the Expires header, the ScriptManager does nothing to help you "bust" the cache when a new version comes out. Ideally, a script-management tool would trick the browser into thinking the script sits in a folder that changes whenever the script is updated, so that scripts could be cached client-side for a year.
I wish I had info on whether the scripts are cached server-side; I would guess they are not. But since the user fetches a script at most about once per day (on my server they seem to be cached for 24 hours), it isn't too important whether the scripts are regenerated on each request.
And finally, if you are using a CDN for things like jQuery (whether you can depends on if you are public or intranet), it is ASP.NET 4.0 and 4.5 that make it easier to tell the ScriptManager to use a CDN and fall back to a local copy when the CDN is down.
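For reference, in 4.5 that looks roughly like the sketch below, combined with EnableCdn="true" on the ScriptManager; the jQuery version and URLs here are illustrative only:

using System;
using System.Web.UI;

// In Global.asax:
void Application_Start(object sender, EventArgs e)
{
    ScriptManager.ScriptResourceMapping.AddDefinition("jquery", new ScriptResourceDefinition
    {
        Path = "~/Scripts/jquery-1.10.2.min.js",       // local fallback copy
        DebugPath = "~/Scripts/jquery-1.10.2.js",
        CdnPath = "http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.10.2.min.js",
        CdnDebugPath = "http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.10.2.js",
        CdnSupportsSecureConnection = true,
        LoadSuccessExpression = "window.jQuery"        // lets 4.5 emit the fallback check
    });
}

Pages then reference the script as <asp:ScriptReference Name="jquery" /> and get the CDN URL plus an automatic fallback to the local path when window.jQuery is undefined.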
Use sumfile.js?n={0}
where {0} is the number of your build.
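A small sketch of that idea, using the assembly version as the build number (the helper name is hypothetical):

using System;

public static class ScriptUrlHelper
{
    // Appends a build-dependent token so the URL changes with every release,
    // forcing a re-download then, while allowing aggressive caching otherwise.
    public static string Versioned(string path)
    {
        Version v = typeof(ScriptUrlHelper).Assembly.GetName().Version;
        return path + "?n=" + v;
    }
}

Usage in a page would then be along the lines of <script src="<%= ScriptUrlHelper.Versioned("/Scripts/sumfile.js") %>"></script>.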
I have a big legacy ASP.NET application which is actually (functionally) composed of two portals. I need to split it into two separate applications, to ease development on each of them.
Of course there are features shared between the two. Some of them are in the DAL and BL, and that is not an issue: all that code was already separated into distinct projects, which produce assemblies that can be referenced by both apps.
But the problem is with some pages, lots of user controls, and some CSS and JavaScript files which are shared between the two "portals" (applications).
I'd like to ask for some advice on how to handle them. My main concern is to avoid duplication, so ideally they should stay in a single place, and be used by both apps.
The first thing I tried was to add the files from one project to the other as linked files. While this works for code files (they get built into the project they are linked into), it doesn't work for .aspx/.ascx or CSS/JavaScript/image files. It does work if I publish first (when marked as Content, they get copied during publish), but I can't do that all the time during development, and such files are not found when the app is debugged or run from source code since, obviously, the linked files are not actually present in the app's file tree when anything looks for them.
Another thought was to create a pre-build event and have it copy all the shared files from a common location.
E.g. I create a project Common and put there all the files that are shared between the applications, organized into folders, and on pre-build I perform an xcopy, along the lines of the sketch below.
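A hedged example of such a pre-build event (the folder layout is an assumption):

xcopy "$(SolutionDir)Common\Shared\*" "$(ProjectDir)Shared\" /S /Y /I

Here /S copies subdirectories, /Y suppresses overwrite prompts, and /I treats the destination as a directory.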
And another thought is to make all the shared files part of an SVN repository which I reference with svn:externals in both projects.
But all of this looks a little cumbersome to me. Has anyone had a similar situation? How did you handle it?
Any advice on any of my suggestions?
You have at least two options:
sharing through virtual directories: https://stackoverflow.com/a/13724316/1236044
create user control libraries: https://stackoverflow.com/a/640526/1236044
The virtual directories approach seems straightforward for resources like CSS, JS, and images.
I also tend to like it for sharing user controls.
The library approach requires more work, but ensures better reusability of the controls in the long run.
I had an identical problem this week with css and javascript files triplicated across three legacy projects.
I removed the files from two of the projects and replaced them with linked files to the first project, but when I ran the website I got 404 errors for css & javascript files missing in the pages belonging to the two projects.
So I simply added the NuGet package 'MSBuild.WebApplication.CopyContentLinkedFiles' to my solution and everything worked: the CSS and JavaScript files were deployed for both projects and my 404 errors disappeared.
I didn't have any shared .aspx / .ascx files, but I would imagine it will work for them too.
See also this question / answer.
I'm considering using LESS for CSS development with server-side (or development-side) processing, but I can't decide whether I should keep the generated CSS files in version control. There are plenty of solutions with hooks, but these add software dependencies to the server. A hook could instead be added locally, so that staging and production areas on the web would get the same files. So, the question is:
Should generated CSS files be included in version control or not? Please keep in mind that some frameworks require a CSS file to exist for a particular reason (e.g. WordPress themes require a style.css file in order to be recognized).
When I say 'considering using LESS', I mean it becomes a requirement. New developers would not have the option to use vanilla CSS once the choice is made in favor of LESS.
Checking in derived artifacts is almost always sub-optimal.
I vote no to checking in the .css. It's only a matter of time until one of your peers or successors checks in an edit to the .css and not the .less; then, when you change the .less, the prior change is lost.
You've pretty much answered your own question. It depends on how you deploy your website.
If the server is just going to pull directly from the Git repository:
1) it needs to have software installed to generate the CSS from LESS, or
2) you need to include the CSS files in the repository.
If you're not pulling straight from the repository on your web server, you could have a build script that pulls from git, generates CSS, and then transfers the content to the web server(s), possibly excluding unnecessary files from the transfer.
In my opinion, Git should be used to keep all of the source for a project, and none of the "derived artifacts" (as mentioned by @thekbb). Developers need to have all the tools installed to generate those derived artifacts during development and test. For deployment to test and production servers, an automated build server should take the source and create just the files needed for distribution.
In the case of software development, you'd have a Makefile with .C and .H files (for example) in your Git repository. Developers and the build server have a compiler installed that will create an executable or compiled library. When the files are packaged for distribution, the source code is not a part of the archive.
For web development, you have source files like original graphics, HTML templates and LESS files. Developers and the build server can run scripts to generate the site assets (CSS from LESS files, static HTML pages from templates, flattened images in multiple sizes/formats, etc.) When the build server deploys new builds, it copies just the files needed by the server, excluding the source graphics, templates and LESS files.
If there are people that need to review the site content, they should do it on a staging server. If that's not possible, the automated build server can create a ZIP file on an internal server that they can download for review.
Should generated CSS files be included in version control or not?
In theory they should not but, for practicality, I usually do check in the generated CSS file. The reason is that it simplifies deployment, since I deploy using Git; I don't need a LESS compiler installed on the server, and usually not even on the machine I'm deploying from (as opposed to the machine I'm developing on). Doing this is useful if you have a separate developer and deployer, but can sometimes be useful even if you're deploying yourself.
Now, there are drawbacks on doing this:
You can't use git add --patch (or you really need to be very careful when doing so)
You should not modify the .css directly; instead, I usually use a secondary .css file for minor modifications, without touching the primary .less or .css file. You can also compile the .less file straight into a minified CSS (e.g. with lessc's --compress flag), to make it less tempting to modify the generated CSS.
Developers have to set up their machines with an automatic recompile tool (like SimpLess or Less.app), so the .css file is updated as soon as they save the .less file. Without automation, you run the risk of the CSS not matching the checked-in LESS file.
I would not do the same when compiling from .C and .H files, though, because the generated binaries are platform-specific, and also because a .less/.css file is usually a very small part of a larger web project, so the space overhead of the additional file is small.
Good question. If you can absolutely guarantee that the CSS file gets updated whenever the LESS is updated, then perhaps yes, as per @Scott Simpson's comment. I suspect that this would be difficult to guarantee, and what happens when the new developer gets a copy of the CSS on a day when the two are out of sync? Also, of course, and I hadn't originally thought of this, what happens if the new developer then makes updates to the CSS file rather than the LESS? If the CSS has to be built and isn't part of the archive, I can see fewer problems.
I would say yes, because what happens if you want to add a developer to your workflow and they don't want or need to build the .less? It would be helpful for them to have access to just the generated file.
We have so, so many RESX files in our ASP.NET 3.5 web application (for localisation purposes), and they're making code changes very slow; on every first run after a change, symbols are built for all of the recompiled RESX files.
Ideally I'd like to store these files in an external assembly and create a ResourceProvider that acts as a bridge.
That way the RESX files won't be affected by each subsequent compilation of the web app.
I also don't want to reinvent the wheel; someone must have done this before, but I can't find anything on it!
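For what it's worth, the bridge could look something like the sketch below, assuming the resources have been moved to an external assembly called MyResources and the provider lives in that same assembly (every name here is a placeholder):

using System.Globalization;
using System.Resources;
using System.Web.Compilation;

// Registered via <globalization resourceProviderFactoryType="..." /> in web.config.
public class ExternalResourceProviderFactory : ResourceProviderFactory
{
    public override IResourceProvider CreateGlobalResourceProvider(string classKey)
    {
        return new ExternalResourceProvider(classKey);
    }

    public override IResourceProvider CreateLocalResourceProvider(string virtualPath)
    {
        // Map a page's virtual path to a resource base name, e.g. "Default.aspx".
        string classKey = System.IO.Path.GetFileName(virtualPath);
        return new ExternalResourceProvider(classKey);
    }
}

public class ExternalResourceProvider : IResourceProvider
{
    private readonly ResourceManager _manager;

    public ExternalResourceProvider(string classKey)
    {
        // "MyResources" is the external assembly holding the compiled resources.
        _manager = new ResourceManager("MyResources." + classKey,
                                       typeof(ExternalResourceProvider).Assembly);
    }

    public object GetObject(string resourceKey, CultureInfo culture)
    {
        return _manager.GetObject(resourceKey, culture);
    }

    public IResourceReader ResourceReader
    {
        // Only needed for implicit (meta:resourcekey) expressions; omitted in this sketch.
        get { return null; }
    }
}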
Actually it looks like reading through this article is going to be a good bet:
http://msdn.microsoft.com/en-us/library/aa905797.aspx
But I'll still hold out in case someone has an example!!