I'm using ASP.NET 3.5 and looking for a way to bundle a bunch of my scripts. I came across ScriptManager's CompositeScript element. Is it a good solution for bundling? Does it have any ramifications?
The pros, cons, and traps are similar to those of other script-bundling solutions: you will want to minify first, pay attention to the order of the scripts, and start each script with a ; to close off any unclosed statement in the preceding file.
One ASP.NET-specific issue is the debug/development experience. If you combine your scripts, it is much more difficult to find your code in the IE debugger: the script will have a machine-generated name that looks similar to the other framework-generated scripts, and your code will be buried in a much larger file.
So I register my script references in code-behind and wrap them in #if DEBUG/#endif and #if RELEASE/#endif blocks (be sure to define a RELEASE symbol in the project properties; it isn't there by default if you use this trick). The RELEASE version bundles all the scripts; the DEBUG version leaves the files separate. A sketch follows.
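Here is a minimal sketch of that registration, assuming a ScriptManager is present on the page; the file paths are made up, and CompositeScript requires 3.5 SP1:

    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        ScriptManager sm = ScriptManager.GetCurrent(Page); // assumes one exists

    #if DEBUG
        // Development: separate files that are easy to find in the debugger.
        sm.Scripts.Add(new ScriptReference("~/Scripts/grid.js"));
        sm.Scripts.Add(new ScriptReference("~/Scripts/validation.js"));
    #endif
    #if RELEASE
        // Release: the same files served as one combined request.
        sm.CompositeScript.Scripts.Add(new ScriptReference("~/Scripts/grid.js"));
        sm.CompositeScript.Scripts.Add(new ScriptReference("~/Scripts/validation.js"));
    #endif
    }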
Also, per Microsoft's recommendation, script bundling works best for files that you need throughout the website. If you have a multi-page site with pages A, B, and C, and your users normally visit only one of them, then bundling the files for A, B, and C forces the user to download the scripts for two pages they never use. I think this is a bad micro-optimization, though, because most apps have small JavaScript files and large libraries, so an entire website's worth of bundled JS is not enough bytes to worry about unless you have a lot of traffic.
Finally, the server-side ScriptManager doesn't offer any way to defer scripts or dynamically trigger a load from the client side (other than loading scripts after the UI), so I use LAB.js to load scripts dynamically later. That sometimes lets you defer a script until you know you need it, and possibly defer loading it forever. Once you bundle that script, every user downloads it whether they turn out to need it or not.
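For reference, the one deferral knob the server-side ScriptManager does expose is loading scripts after the page markup; everything beyond that (on-demand, conditional loading) is LAB.js territory:

    // Emit the script includes after the UI markup instead of before it; this
    // is the extent of the server-side ScriptManager's deferral support.
    ScriptManager.GetCurrent(Page).LoadScriptsBeforeUI = false;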
Part 2
Another gotcha, at least for me: while you can enable caching of JS files in web.config (no time to look up the syntax at the moment!) and you can also enable caching at the IIS level using the Expires header, the ScriptManager does nothing to help you "bust" the cache when a new version comes out. Ideally, a script-management tool would trick the browser into thinking the script sits in a folder whose name changes with each update, so that scripts could be cached client-side for a year.
I wish I had info on whether the scripts are cached server-side; I would guess they are not. But since the user usually fetches a script at most once per day (on my server they seem to be cached for 24 hours), it isn't too interesting whether the scripts are regenerated on each request.
And finally, if you are using a CDN for things like jQuery (whether you can depends on whether you are public-facing or on an intranet), it is the 4.0 and 4.5 versions that make it easier to tell the ScriptManager to use a CDN and fall back to a local copy when the CDN is down.
Use sumfile.js?n={0}, where {0} is your build number.
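A sketch of one way to generate that query string, using the assembly version as the build number (the helper class and method name are mine, not a framework API):

    using System;
    using System.Reflection;

    public static class ScriptUrl
    {
        // Returns e.g. "~/Scripts/sumfile.js?n=1.0.372.0"; because the version
        // changes with each build, browsers treat it as a brand-new URL.
        public static string WithBuildNumber(string virtualPath)
        {
            Version v = Assembly.GetExecutingAssembly().GetName().Version;
            return virtualPath + "?n=" + v;
        }
    }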
I am seeing old CSS code in my browser even though the new code is in the files on the server. I can see the minified CSS code in the browser when the application loads up.
My team is using the Liferay framework, which seems to minify the CSS files. I am a noob in Liferay.
I found portal-ext.properties under liferay-portal-6.1.10-ee-ga1/tomcat-7.0.25/webapps/ROOT/WEB-INF/classes/ but didn't find any parameter for minifying the CSS files.
You shouldn't just update whatever CSS code you find deployed on the server. Instead, update your theme, build it (it builds to a web application), and deploy it to the server. That takes care of minification and of updating caches; changing random files in the web server's territory will not.
For this you'll have to find the source files for your theme (or a whole plugins-sdk, which might contain that theme). Look in your organization's source control system, which is where I'd expect them.
Explaining how to update and build themes goes well beyond the scope of a single answer here; if these are really your first steps, it might be worth getting some help from someone who has experience with Liferay.
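As for the minification parameter you were looking for: it isn't in the theme, it's a portal property. For development only, I believe the standard Liferay 6.x switches are the ones below (double-check the names against your version's portal.properties before relying on them):

    # portal-ext.properties -- development use only
    # Serve CSS and JS unminified/unaggregated so changes are visible.
    theme.css.fast.load=false
    javascript.fast.load=false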
In my ASP.NET MVC application, I want to update, at runtime, the contents of a CSS or JS file that is embedded inside my DLL, without restarting the application.
My application takes a long time to restart, so in development I'd rather not wait minutes at a time to see changes to JS or CSS.
I think embedded resources are not meant to be changed at runtime at all. It is much the same as the way you cannot modify the compiled bytes of your assembly at runtime. Consider a different architecture for your application so that you won't need to update an embedded resource at runtime. JS and CSS especially can be added at runtime and served by your server without any need to be embedded.
Anyway, I understand that you may have a reason to embed the resources, so here is a link I found that may be useful for you: Programmatically embed resources in a .NET assembly
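One hedged sketch of that "different architecture" idea (the controller, paths, and resource names are all mine; MVC 3+ shown): serve the file straight from disk in DEBUG builds, so edits appear on the next request, and use the embedded copy only for release builds.

    using System.IO;
    using System.Web.Mvc;

    public class ScriptsController : Controller
    {
        public ActionResult Get(string name)
        {
            // NOTE: validate "name" in real code to block path traversal.
    #if DEBUG
            // Read from the source tree; no restart needed to see edits.
            string path = Server.MapPath("~/ClientAssets/" + name);
            return File(path, "text/javascript");
    #else
            // Fall back to the embedded copy; the resource name is an assumption.
            Stream s = typeof(ScriptsController).Assembly
                .GetManifestResourceStream("MyApp.ClientAssets." + name);
            if (s == null) return HttpNotFound();
            return File(s, "text/javascript");
    #endif
        }
    }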
I have faced a lot of issues with publishing, like when you need to make small changes to the code: sometimes the generated DLL file (for example, the DLL for default.aspx.cs when published) is not recognized by IIS, which complains that the code-behind is wrong, or something similar. Sorry for not remembering the exact error message; I am hoping you know what I mean at this point.
Therefore, I usually do a simple copy-paste operation instead of publishing.
Could you tell me what I am missing by NOT using the Publish method? How is publishing better? Or which one do you prefer, and why?
Basically it's a pros-and-cons situation.
Thank you
Well, it depends on what you mean by "copy":
With publishing you have the option to pre-compile all or part of your application. You can publish to a local folder in your file system (instead of to your target/host) and then copy only the updated files. If you are making code-behind (C#/VB) changes, this means you'll likely only need to copy/overwrite DLLs. It goes without saying that if you've made content changes (HTML/Razor/script/etc.), you'd need to copy/overwrite those as well.
If you're new to deployment, you may find yourself simply copying/overwriting everything, which is the safest way to go. Once you get more experience, you'll recognize which assets you actually need to update (one or a few DLLs and/or content files, instead of everything). There's no magic to this; usually it's just a matter of looking at the timestamps of the DLLs/files after you've published (locally) or rebuilt your web application.
I'd recommend doing a local publish so you can see what is actually needed on your server. The files published to your local file system/folder are exactly what needs to be on your host/server. Doing so will visualize and remove whatever "mystery" there is to publishing:
you'll see what is actually needed (on your server) vs. what's not
you'll see the file timestamps, which will help you recognize which files were actually changed vs. those that weren't (and therefore don't need to be updated).
once you get the hang of it, you won't need to "copy"/FTP everything; you'll update only the files that were actually modified.
So "copy" can mean the above, or if you are saying you will simply copy all of your development code (raw (vb/cs)html/cs/vb) to your host, then that means your site will be dynamically compiled as each resource is needed/requested (nothing is pre-compiled). Also "easy" but you do lose pre-compilation which means there is a delay when each of your web pages are requested/needed (ASP.net needs to dynamically compile). Additionally, you are also exposing your source code on the server. It may not mean much depending on your situation, but it is one more thing to consider.
Here's more info on pre-compilation and options.
Assuming we consider an .aspx page and its .aspx.cs code-behind file, there are three alternative ways of deploying your site:
You can copy both to IIS. The .aspx will be compiled to a .cs upon the first request, and then both .cs files will be compiled into a temporary .dll.
You can "publish" to IIS. This compiles the code-behind class into a .dll but copies the .aspx across untouched; the .aspx will be translated to .cs and then to a .dll upon the first request.
You can "publish" the site and then manually precompile it with aspnet_compiler. Publishing compiles the code-behind into a .dll as before, but precompilation then clears out your .aspx files, removing their content and moving the compiled code into yet another .dll.
All three models have their pros and cons.
The first is the easiest to update incrementally, but at the same time it is the most open to unwanted modifications.
The second is also easy and can be invoked from Visual Studio; it closes off some unwanted modifications on the server, but the .aspx files still take time to compile upon the first request.
The third takes time and some manual steps, but it prevents any changes and also speeds up the warm-up of the site, since no compilation of assets is necessary. It is great for shared environments. An example invocation follows.
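For reference, the manual precompilation step looks something like this from a Visual Studio command prompt (the paths and virtual directory name are made up):

    rem Precompile the published site into a separate target folder.
    rem Add -u if you want the .aspx files to remain updatable.
    aspnet_compiler -v /MySite -p C:\publish\MySite C:\publish\MySite_precompiled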
We have several fairly large JavaScript files embedded into a single script-resources DLL. This DLL is consumed by multiple projects by way of a reference and page includes via the ASP.NET ScriptManager. It keeps things nice and neat within our ASP.NET pages and requires very little work to integrate into new projects.
The problem is that some of these script files are quite large (approx. 100 KB) and take time to download. Running minify on them before embedding reduces this a lot (to around 70 KB) but not enough. What we would like to do is gzip the files before they are embedded. However, just gzipping the files causes syntax errors, because the content is never unzipped. A content type of "text/javascript" is applied in AssemblyInfo when the resource is embedded, but we can't find a way to specify a content encoding.
Is there any way to make this work without having to write an HttpModule/handler (which would mean changing the config for all consuming projects)?
Okay: judging from many different attempts, an absence of answers, and a lot of Google searching, an HttpModule is the only way to approach this. In an attempt to keep it easy to configure, I've set up the HttpModule inside the same DLL that contains the script files, as below.
Simplified DLL Structure
\ScriptMinified\*.js [Embedded Resource] (Minified Only)
\ScriptCompressed\*.gz [Embedded Resource] (Gzipped and Minified)
\ScriptDebug\*.js [Embedded Resource] (Raw uncompressed and commented)
MyScriptManager.cs
MyHttpModule.cs
The only additional work is an entry in the consumer's web.config to enable the module. I've also made the initialization call in MyScriptManager, which emits the script tags, detect the presence/mode of the new HttpModule and serve the gzipped, debug, or minified version as required. This means we don't have to recode or reconfigure any old projects for them to work, so it achieves much the same result.
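For anyone after the shape of such a module, here is a stripped-down sketch under my own naming assumptions (the resource namespace, the .js-only filter, and the gzip-or-nothing logic are illustrative; the real module also picks between the debug and minified variants). Consumers register it under <httpModules> (classic pipeline) or <system.webServer>/<modules> (integrated).

    using System;
    using System.IO;
    using System.Reflection;
    using System.Web;

    public class MyHttpModule : IHttpModule
    {
        public void Init(HttpApplication app)
        {
            app.BeginRequest += OnBeginRequest;
        }

        private static void OnBeginRequest(object sender, EventArgs e)
        {
            HttpApplication app = (HttpApplication)sender;
            HttpContext ctx = app.Context;

            if (!ctx.Request.Path.EndsWith(".js", StringComparison.OrdinalIgnoreCase))
                return;

            string accept = ctx.Request.Headers["Accept-Encoding"] ?? string.Empty;
            if (accept.IndexOf("gzip", StringComparison.OrdinalIgnoreCase) < 0)
                return; // browser can't take gzip; serve the plain minified copy

            // Assumed resource naming: <RootNamespace>.ScriptCompressed.<file>.gz
            string file = VirtualPathUtility.GetFileName(ctx.Request.Path);
            Stream gz = Assembly.GetExecutingAssembly().GetManifestResourceStream(
                "MyScripts.ScriptCompressed." + file + ".gz");
            if (gz == null)
                return;

            ctx.Response.ContentType = "text/javascript";
            ctx.Response.AppendHeader("Content-Encoding", "gzip");

            byte[] buffer = new byte[8192]; // manual copy; Stream.CopyTo is .NET 4+
            int read;
            while ((read = gz.Read(buffer, 0, buffer.Length)) > 0)
                ctx.Response.OutputStream.Write(buffer, 0, read);
            gz.Close();

            app.CompleteRequest(); // short-circuit the rest of the pipeline
        }

        public void Dispose() { }
    }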
We have a couple of large solutions, each with about 40 individual projects (class libraries and nested websites). It takes about 2 minutes to do a full rebuild all.
A couple of specs on the system:
Visual Studio 2005, C#
Primary project is a Web Application Project
40 projects in total (4 Web projects)
We use the internal VS webserver
We extensively use user controls, right down to a user control which contains a textbox
We have a couple of inline web projects that allows us to do partial deployment
About 120 user controls
About 200,000 lines of code (incl. HTML)
We use Source Safe
What I would like to know is how to bring down the time it takes to hit the site with a browser for the first time. And I'm not talking about after a full deployment; I'm talking about making a small change in the code, building, and refreshing the browser.
This first hit takes about 1 minute 15 seconds before data gets back.
To speed things up, I have experimented a little with RAM disks, specifically setting the tempDirectory attribute of the <compilation> element in web.config to point at the RAM disk.
This does speed things up a bit. Interestingly, it totally removed ALL I/O access during the first hit from the browser.
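For reference, that setting looks something like this (the drive letter is whatever your RAM disk is mounted as):

    <!-- web.config: redirect ASP.NET's temporary compilation output -->
    <compilation tempDirectory="R:\AspNetTemp" />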
Remarks
We never do a full compile during development, only partial. For example, the class library being worked on is compiled and then the main site is compiled which then copies the binaries from the class library to the bin directory.
I understand that the ASP.NET engine needs to re-parse all the .ascx/.aspx files after critical files have changed (the bin directory, for example), but what I don't understand is why it needs to do that when only one library DLL has been modified.
So, does anybody know of a way to either sub-segment the solutions to provide a faster first hit, or fine-tune settings in the config files or elsewhere?
And, again: I'm only talking about development, NOT production deployment, so the pre-built compile option is not applicable.
Thanks, Ruvan
Wow, 120 user controls, some of which only contain a single TextBox? This sounds like a lot of code.
When you change a library project, every project that depends on it then needs to be recompiled, and also every project that depends on those, and so on all the way up the stack. You know you've only made a one-line change to a function, one that doesn't affect any of your user controls, but the compiler doesn't know that.
And, as you're probably aware, ASPX and ASCX files are only compiled when the web application is first hit.
A possible speed improvement might be gained by changing your ASCX files into composite controls inside another library project. These would then be compiled at compile time (so to speak) rather than at web-application load time. A sketch follows.
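To illustrate (a hypothetical control, not the poster's code), the "user control containing a single textbox" becomes a class compiled into the library at build time, written here in C# 2.0 style to match VS2005:

    using System.Web.UI.WebControls;

    // Compiled with the library at build time, unlike an .ascx, which the
    // ASP.NET engine parses and compiles on the first request.
    public class LabeledTextBox : CompositeControl
    {
        private TextBox _textBox;

        protected override void CreateChildControls()
        {
            Controls.Clear();

            Label label = new Label();
            label.Text = "Value: ";
            label.AssociatedControlID = "value";
            Controls.Add(label);

            _textBox = new TextBox();
            _textBox.ID = "value";
            Controls.Add(_textBox);
        }

        public string Value
        {
            get { EnsureChildControls(); return _textBox.Text; }
            set { EnsureChildControls(); _textBox.Text = value; }
        }
    }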