What is faster and why? - asp.net

For maintainability reasons, I want to drive my JavaScript from the database, in that I only want to send the JavaScript that is needed based on the user's options.
So, is it faster/less resource-heavy to have links in the database pointing to JavaScript files and then use Response.WriteFile to embed those files into the client-side page, or is it faster/less resource-heavy to store the JavaScript directly in the database and Response.Write it onto the page as and when needed?

So, is it faster/less resource-heavy to have links in the database pointing to JavaScript files
Usually yes.
External JavaScript files can be cached by the browser and will be loaded only once. JavaScript code injected into the HTML page needs to be loaded every time, which increases bandwidth and loading times.
Also, having JavaScript code in a static file (instead of a dynamic one that fetches the code from the database on every request) is bound to be the fastest and least resource-intensive solution.
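As a rough sketch of the "links in the database" approach (the GetScriptPathsForUser lookup and the paths are hypothetical), the database stores only paths and the page emits ordinary script tags, so the files themselves stay static and cacheable:

using System;
using System.Collections.Generic;
using System.Web.UI;

public partial class Default : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Hypothetical lookup: the database stores *paths*, not script text.
        IEnumerable<string> scriptPaths = GetScriptPathsForUser(User.Identity.Name);

        foreach (string path in scriptPaths)
        {
            // Emits <script src="..."></script>, so the browser can cache each file.
            Page.ClientScript.RegisterClientScriptInclude(GetType(), path, ResolveUrl(path));
        }
    }

    // Placeholder for the database call described in the question.
    private IEnumerable<string> GetScriptPathsForUser(string userName)
    {
        return new[] { "~/scripts/common.js", "~/scripts/options.js" };
    }
}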

Don't use Response.Write.
Be aware that if you send the entire JS file once, the client will / should cache it so it doesn't have to be sent again.
DB lookups for every page just to get the relevant JS will be slow.
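If you do end up going to the database for the script list, one way to soften that cost (a sketch only; the loadFromDb delegate and the cache duration are assumptions) is to cache the lookup per role/option set with ASP.NET's cache:

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public static class ScriptListCache
{
    // Cache the per-role script list so the DB is not hit on every page request.
    public static IList<string> GetScripts(string role, Func<string, IList<string>> loadFromDb)
    {
        string cacheKey = "scripts:" + role;
        var cached = HttpRuntime.Cache[cacheKey] as IList<string>;
        if (cached != null)
            return cached;

        IList<string> scripts = loadFromDb(role);
        HttpRuntime.Cache.Insert(cacheKey, scripts, null,
            DateTime.UtcNow.AddMinutes(30), Cache.NoSlidingExpiration);
        return scripts;
    }
}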

By only sending links to the JavaScript to the client, a separate HTTP request has to be made to go and get each file.
If you embed only the required JavaScript directly into the page when needed, this is avoided. Look at jQuery: that's a load of JavaScript available to the client that isn't being used.
Fewer HTTP requests generally result in a faster website, so I would go with embedding the code directly.
Edit:
I disagree with the caching answers above. If you ensure that only the smallest amount of JavaScript required is embedded, then my approach should be faster. This page will be cached too!
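For what it's worth, embedding a small page-specific snippet doesn't need hand-rolled Response.Write calls; a minimal code-behind sketch (the snippet and key name are made up):

// Embed a small, page-specific snippet inline instead of referencing a large
// external library the page barely uses.
protected void Page_Load(object sender, EventArgs e)
{
    const string snippet = "document.getElementById('total').value = '0';";

    if (!Page.ClientScript.IsClientScriptBlockRegistered(GetType(), "initTotal"))
    {
        // The last argument asks ASP.NET to wrap the code in <script> tags.
        Page.ClientScript.RegisterClientScriptBlock(GetType(), "initTotal", snippet, true);
    }
}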

Related

How are .ascx files cached?

I have an asp.net project, and I load a base aspx page to display to the user. Then I ajax in the results of an ascx component and inject it via innerHTML in javascript.
I have noticed that the ascx component loads slowly on the first page load, but instantly thereafter. This is really cool, but I do not understand how this can be cached, as the contents are generated by making several db calls.
Does the server send some kind of hash to compare the contents to, to see if it changed on the server or not? Is this a browser thing or an asp.net thing?
What you are experiencing is most likely just-in-time compilation and has very little to do with the user control itself.
Watch the performance monitor counters for .NET. They will tell you a lot about what's going on.
Can you use Firebug/IE developer console/etc. to determine the response code? If you can check the headers, you should be able to see a date that indicates either a cache time or a last modified date. I poked through the MS ASP.NET Ajax documentation, but couldn't find any references to default caching times or cache modification.
However, jQuery's ajax function uses an ifModified property which (according to the documentation) checks the Last-Modified header to determine whether or not it should retrieve the results. I'd imagine that the ASP.NET Ajax calls work in a similar fashion. It may not be prudent for your current project, but jQuery makes it very easy to set caching options.
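If the fragment is returned by a page or handler you control, a minimal sketch of the cache headers you could emit so that such conditional requests have something to validate against (the values here are illustrative, not defaults):

protected void Page_Load(object sender, EventArgs e)
{
    // Allow the browser to cache the rendered fragment privately for a short time,
    // and give it a Last-Modified value to send back in If-Modified-Since checks.
    Response.Cache.SetCacheability(HttpCacheability.Private);
    Response.Cache.SetLastModified(DateTime.Now);      // better: the data's real timestamp
    Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5));
}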

Project hosting on Google Code. Files are cached?

I do not really understand how Google Code handles file versioning.
I am building a jQuery plugin that anyone can access. Like so:
<script type="text/javascript" src="http://jquery-old-browser-warning.googlecode.com/files/jquery.browser-warning.js"></script>
This script accesses other files on the same project (via ajax).
The problem is that when I upload a new file, it just seems like there aren't any changes to it. Google recommends that new files should have new names.
But then I would have to change the filenames that the script loads.
But then I would have to change the script file as well, and that would break everybody's implementation (with the script tag above).
Is there a way to force a file to change when uploading with the same filename?
PS: If I go directly to the project page's file list, I do get the file with the updated content. But as I said, not when getting it through Ajax.
The cheapest trick in the book to prevent caching is adding some random content to a GET parameter:
www.example.com/resources/resource.js?random=1234567
You can for example use the current timestamp for this.
This, however, causes any and every access to re-fetch the content, and invalidates any client-side caching mechanism as well. I would use this only as a last resort. If Google are that stringent about caching, I'd rather develop a workflow that allows for easy renaming of files.
I don't know your workflow, but maybe you can work with versioned directories?
Like so:
www.example.com/50/resources/resource.js
www.example.com/51/resources/resource.js
that would keep whatever caching the client employs intact, but whenever there's a change from your end, the browser would reload the content.
I think it's just browser caching, so when you request the file via Ajax, just add a random parameter or version number.
For example, Stack Overflow adds a version parameter to static content, like
http://sstatic.net/so/all.css?v=6638
Are you talking about uploading files to the "Downloads" area? Those should have distinct filenames, for example they should be versioned. If you're uploading the script code, that should be submitted by the version control system you're using, and should most definitely keep the same name across revisions.
Edit: your code snippet didn't show up on my page, so I misunderstood what you're trying to do. I don't imagine Google would be happy with you referencing the SVN repository every time some client page is loaded :)

Is there any way to "peek" at a file while it's uploading through HTTP onto a Windows box?

I need to add a file upload function to an ASP.NET website and would like to be able to read a small portion of the file on the server while it's still uploading. A peek or preview type function so I can determine contents and give some feedback to the user while it is still uploading (we're talking about large files here). Is there any way to do this? I'm thinking worst case of writing a custom control which uploads only a fixed number of bytes of the file once chosen and then under the covers starts another upload of the full file. Not totally sure even this is possible, but I'm looking for a more elegant solution anyway... Thanks!
It sounds like you want to avoid the "white screen of death" during large file uploads. If so, you might want to look into Telerik's RadUpload control, which provides a progress bar during upload.
If you want to roll your own, I'd decompile their trial copy for ideas. I've peeked at their source in this way, and they accomplish the progress bar through a combination of a custom HttpModule and HttpHandler along with their control. The handler routes the file in a streamed fashion while the module provides "percent complete" information--or the other way round; it's been a few years since I looked at it.
Edit:
Actually, I'm trying to do server-side processing as the file is still being uploaded. I want to import user data via HTTP, but want to present the user with preview/options of how we'll process their data while the file is still uploading (column definitions, etc.). No matter what, we'll take the file as is, so the upload doesn't need to be interrupted. Given that I actually want interaction during the upload based on reading a relatively small portion of the file as it is being uploaded, would you still recommend the same approach?
Well... it'd be very difficult to do, and it might not work cross-browser, but it could be done with this approach.
Since it's entirely possible to work with the incoming file as a stream as I mentioned, you could have your initial processing update some state as part of that stream processing. If you don't process as a stream, you have to wait for the full file upload before you can do anything with it.
The problem is this: during the file upload, you cannot have any more HTML-based interaction. The post must continue unabated or the upload will fail. The control I linked only works at all because most browsers allow javascript to continue to execute and update page DOM during the post.
So in order to make this work, you have to update some standardized state server-side during your file processing in the HttpModule, which is transmitted back to the client via XmlHttpRequest calls handled by the HttpHandler. You have to use pure javascript/DOM to update the UI for the user.
So, as I said, it's complex and likely to be buggy cross-browser, but it could theoretically be done.
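For the record, here is a very rough sketch of the module half of that idea (the uploadId query-string parameter and the in-memory peek store are assumptions, and real code would need cleanup and error handling beyond this):

using System;
using System.Collections.Generic;
using System.Text;
using System.Web;

// Reads whatever part of the upload ASP.NET has preloaded and records a small
// "peek" of it, keyed by an uploadId query-string value supplied by the page.
public class UploadPeekModule : IHttpModule
{
    private static readonly Dictionary<string, string> Peeks = new Dictionary<string, string>();

    public void Init(HttpApplication app)
    {
        app.BeginRequest += OnBeginRequest;
    }

    private static void OnBeginRequest(object sender, EventArgs e)
    {
        HttpContext context = ((HttpApplication)sender).Context;
        string uploadId = context.Request.QueryString["uploadId"];
        if (uploadId == null || context.Request.ContentLength <= 0)
            return;

        // The classic trick for getting at the raw request stream.
        var worker = (HttpWorkerRequest)((IServiceProvider)context).GetService(typeof(HttpWorkerRequest));
        byte[] preloaded = worker.GetPreloadedEntityBody();
        if (preloaded != null && preloaded.Length > 0)
        {
            // Caveat: if you go further and drain the rest of the body with
            // ReadEntityBody, ASP.NET will not see it again (Request.Files will be
            // empty), so you would have to save the file yourself as you read it.
            lock (Peeks)
            {
                Peeks[uploadId] = Encoding.UTF8.GetString(
                    preloaded, 0, Math.Min(preloaded.Length, 1024));
            }
        }
    }

    public static string GetPeek(string uploadId)
    {
        lock (Peeks)
        {
            string peek;
            return Peeks.TryGetValue(uploadId, out peek) ? peek : null;
        }
    }

    public void Dispose() { }
}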
There are alternatives that might be more stable, but might not necessarily be feasible: you could build an ActiveX control or a ClickOnce .NET application that pre-processes the file before upload, and maybe even asynchronously transfers the file while the user continues browsing. Some users may not like that option, and I don't know the particulars of your deployment scenario, but it's an option.
There is an HTTP HEAD method but not PEEK.
HEAD will give you information and headers about the file.
Of course you can make a special request handler that does anything you want. You don't have to work with static resources, you can dynamically create any response you want.
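As a sketch of such a handler (illustrative only, and paired with the hypothetical UploadPeekModule sketched above), the page's XmlHttpRequest polls could hit something like:

using System.Web;

public class UploadPeekHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Returns whatever the module has peeked so far for this upload.
        string uploadId = context.Request.QueryString["uploadId"];
        context.Response.ContentType = "text/plain";
        context.Response.Write(UploadPeekModule.GetPeek(uploadId) ?? string.Empty);
    }

    public bool IsReusable
    {
        get { return true; }
    }
}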

Javascript Script Combining and Caching

I am building a AJAX intensive web application (using ASP.NET, JQuery, and WCF web services) and am looking into building an HTTP Handler that handles script combining and compression for my JavaScript files and my CSS files. I know not combining the scripts is generally a less preferred approach, and I'm sure it's probably the way I will end up going, but my question is this...
Since so many of my JS files come from the controls I use, don't they get cached by the browser after the first load anyway? Since so many of these controls appear on many pages of my web application, is it actually faster to combine all of my scripts and serve that one file (which will vary for every page), or to serve the individual files, which will get cached? I guess what I'm getting at is: by enabling script combining, am I now losing part of the caching ability of the browser? I know I can cache the combined script, but the combined script will be different for every page, whereas with the individual control scripts each one will be cached and the number of new scripts will be minimal for each page call.
Does this make any sense? Thoughts?
The fewer JS files you serve, the faster your pages will be, due to a smaller number of round trips to the server. I would manually put all the common JS code into one file (or as few files as possible), all the CSS code into one file, etc., and not worry about using a handler to combine the files. The handler is going to take processing time to combine the files, so you pay that penalty as well. You can turn on gzip compression in IIS and have it handle that for you. I would run something like YUI Compressor on the JavaScript files used in production.
If the handler changes the file contents from page to page, browsers won't be able to cache it. If you are using SSL this point will be moot though as the browser won't cache the files anyway.
EDIT
I've been corrected: some browsers (like Firefox) can cache SSL content, but not all.
As others mentioned: minify, gzip, and turn on caching (set an expiry time and make sure you support ETags) on the one static JS file and the one static CSS file you have. On top of this, it's recommended to load your CSS files as early as possible and your JS files as late as possible (JS file loading blocks other downloads, and the browser can render the page faster if it gets the CSS as soon as possible). Sprites also help if you have many small images/icons. Loading static content from subdomains helps the browser make more simultaneous downloads, and you could drop all your cookies for those subdomains to lower the HTTP header size.
You could consult YSlow for performance analysis; it's a great tool!
generally a less preferred
Is it for live JS? I'm thinking of JavaScriptMVC, which compresses all the code into one file when compiled for production, not development... It's a heavyweight, I believe.
Usually it's better to combine all scripts. In this case you'll reduce HTTP overhead. Minified control scripts are usually quite small. In the rare case where you are using quite a large control, you could leave it out of the combined main JS file.
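For completeness, a minimal sketch of the kind of combining handler the question describes (the file list and content type are illustrative); if you do go this route, at least emit cache headers so the combined result is cacheable:

using System;
using System.IO;
using System.Web;

public class CombinedScriptHandler : IHttpHandler
{
    // In a real handler this list would come from configuration or the query string.
    private static readonly string[] ScriptFiles =
    {
        "~/scripts/jquery.js",
        "~/scripts/site.js"
    };

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "application/x-javascript";
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(30));

        foreach (string virtualPath in ScriptFiles)
        {
            string physicalPath = context.Server.MapPath(virtualPath);
            context.Response.Write(File.ReadAllText(physicalPath));
            context.Response.Write(";\n"); // guard against a missing trailing semicolon
        }
    }

    public bool IsReusable
    {
        get { return true; }
    }
}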

Cons of external JavaScript file over inline JavaScript

What are some of the disadvantages of using an external JS file over including the JS as a part of the ASPX page?
I need to make an architectural decision and heard from coworkers that external JS does not play nice sometimes.
The only downside that I am aware of is the extra HTTP request needed. That downside goes away as soon as the Javascript is used by two pages or the page is reloaded by the same user.
One con is that the browser can't cache the JS if it's in the page. If you reference it externally the browser will cache that file and not re-download it every time you hit a page. With it embedded it'll just add to the file-size of every page.
Also maintainability is something to keep in mind. If it's common JS it'll be a bit more of a pain to make a change when you need to update X number of HTML files' script blocks instead of one JS file.
Personally I've never run into an issue with external files vs embedded. The only time I have JS in the HTML itself is when I have something to bind on document load specifically for that page.
Caching is both a pro and potentially a con, if you are not handling it properly.
The pro is obvious, as it will improve page loading on every page load past the first one.
The con is that when you release new code, it may still be cached by the user's browser, so they may not get the update. This can easily be solved by changing the name of your JS file. We automatically version our JS with the file's timestamp, and then make sure the web request points to the correct file through configuration on our web server (mod_rewrite, Apache).
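A sketch of the same idea, using a query-string version derived from the file's timestamp instead of renaming (paths and names are illustrative; the renaming/rewrite approach described above is a variant of the same technique):

using System;
using System.IO;
using System.Web;

public static class ScriptVersioning
{
    // Builds /scripts/site.js?v=<last-write ticks>, so the URL changes whenever
    // the file changes and stays stable (and cacheable) otherwise.
    public static string VersionedUrl(string virtualPath)
    {
        string physicalPath = HttpContext.Current.Server.MapPath(virtualPath);
        long stamp = File.GetLastWriteTimeUtc(physicalPath).Ticks;
        return VirtualPathUtility.ToAbsolute(virtualPath) + "?v=" + stamp;
    }
}

In markup that would be referenced as something like <script src='<%= ScriptVersioning.VersionedUrl("~/scripts/site.js") %>' type="text/javascript"></script>.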
Ask them to define "play nice". Aside from better logical organization, external js files don't have to be transmitted when already cached.
We use YUI compressor to automatically minify and combine external scripts into one when doing production/staging builds.
The only disadvantage I know of is that another request must be made to the server in order to retrieve the external JS file. As was said before me, you can use tools like YUI Compressor to minimize the effects of this.
The advantage however would be that you can keep all of your JS code in a separate more maintainable format.
Another huge advantage of external JavaScript is the ability to check your syntax with JSLint. That, added to the ability to minify, combine, and cache external scripts, makes inline JavaScript seem like a poor choice.
