Resource loader with ready() function calls that also loads CSS? - yepnope

Ideally I'm looking for a Javascript resource loader that will:
(1) Allow me to make "ready" calls like head.js does, e.g.
head.ready(function() {
    $("#my").jquery_plugin();
});
// load jQuery whenever you wish, e.g. at the bottom of the page
head.js("/path/to/jquery.js");
(2) Load CSS files like yepnope (which can also handle file names with a hash on the end by using the css! prefix). I don't particularly need the conditional load functionality (at this stage).
(3) Ideally, only load resources once even if multiple calls are made (head.js does this automatically, yepnope does this with a filter).
At the moment I'm resorting to using both head.js and yepnope, as I haven't been able to find one that supports both the first two requirements. Obviously this is not ideal, as both together (with filters and prefixes) come to 7kb minified. I think this is a bit too heavy as a bootstrap script.
One option is to roll my own using a combination of the two and strip out the functionality I don't need... but I'd rather stick with one that's going to be supported, to reduce the pain of future updates etc.

So we've stuck with a combination of head.js and yepnope until something better comes along.
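For reference, the combination currently looks roughly like this (the paths are hypothetical; the plugin call is the one from the question). head.js handles the script loading and the ready() calls, while yepnope is only used for the stylesheets:
// head.js side: load the scripts, then run page code once they're ready
head.js("/js/jquery.js", "/js/jquery.plugin.js");
head.ready(function() {
    $("#my").jquery_plugin();
});
// yepnope side: CSS only; the css! prefix tells yepnope to treat the resource
// as a stylesheet even when the file name has a hash on the end
yepnope("css!/css/screen.css?v=abc123");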

Related

Is there a way to split CSS using the asset pipeline?

The end result of what I'm trying to achieve is that rather than having a single application.css file, I want to split it into two sections -- a section I'm going to inline into the <head> tag, and then everything else. The reasoning behind this is that we want to inline the portion of our CSS that applies to above-the-fold elements of the page.
What I'm wondering is if there's a way to leverage the asset pipeline to remove the portion of the CSS that's inlined from the application.css file?
I feel like this is one of those problems where the way I'm thinking about the problem may be the biggest blocker, so totally open to alternative ways to think about this (i.e. not using the asset pipeline).
Just to make the problem more interesting, ideally I'd like a way to do this that's independent of the project itself, because there are multiple Rails front-ends where I'll need to apply this technique.
NOTE: Determining which part of the CSS I want to inline is not the problem -- that I have solved. What I'm looking for is a way to, as we continue to update our CSS in the future, make generating the two parts of the CSS a simple rake task, or integrated into the asset pipeline so it's done on deploy, etc.
Seems like the most straightforward way of doing this, if we assume that the asset pipeline (a.k.a. sprockets) is the right approach, is extending sprockets via an exporter -- the thing that actually writes assets to disk.
Update: I've instead looked at css-purge as an off-the-shelf solution for identifying which CSS is used on a given page. Having this as a separate tool that doesn't care that a given page was generated via a Rails app helps make it more useful.

Easy way to make a static copy of a web app for JSFiddle?

I often have a problem where I'm working on a dynamic web app with tons of front-end or back-end code and there is a CSS problem that just eludes me despite an hour of scratching my head. I know that StackOverflow could solve it in a second, and I'd like to post it, but I either have to
Make the app public along with steps to reproduce the state, or
Tediously copy out the DOM and assets (CSS) along with the current state.
Neither is very straightforward. Note that the DOM is dynamically generated so "View Source" won't cut it. Similarly, the CSS could be spread out across multiple files and I'd like to just grab it all at once.
Is there an easy way to copy out the DOM and all CSS as a single file so that I can insert it into something like JSFiddle and be on my way?
The quickest way to get all HTML on the page as-is is to paste this in the address bar:
javascript:alert(document.body.outerHTML)
You can also use the console, of course, but the above works even in old IE versions and is easier to copy/paste.
I don't think there's a good way to get the CSS at all, but you could try using a jQuery selector or similar to get the URLs:
$('link[type="text/css"]').each(function(i, link) {
    console.log(link.attributes.href.value);
});
And downloading and concatenating the CSS.
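If you want to automate that step, something along these lines in a modern browser console would fetch each linked stylesheet and dump the concatenated CSS (a sketch; it assumes the stylesheets are same-origin so the requests aren't blocked, and that the browser supports fetch):
Promise.all(
    Array.from(document.querySelectorAll('link[rel="stylesheet"]')).map(function(link) {
        return fetch(link.href).then(function(res) { return res.text(); });
    })
).then(function(sheets) {
    // one blob of CSS, ready to paste into JSFiddle's CSS pane
    console.log(sheets.join('\n\n/* ---- next stylesheet ---- */\n\n'));
});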

Combining CSS files: per site or per page template?

We all know that we're supposed to combine our CSS into one file, but per site or per page? I've found pros and cons to both.
Here's the scenario:
Large site
CSS files broken out into one file for global styles and many for modules
Solution A: Combine ALL the CSS files for the whole site into one file:
Best part is that the one file would be cached on every page after the initial hit! The downside is that the naming convention for your selectors (classes and IDs) becomes more important, as the chance of a namespace collision increases. You also need a system for styling the same module differently on separate pages. This leads to extra selectors in your CSS, which is more work for the browser. This can cause problems on mobile devices like the iPad that don't have as much memory and processing power. If you're using media queries for responsive design, your troubles compound even further as you add in the extra styles.
Solution B: Combine one CSS file per page template:
(By page template I mean one layout, but many different pages, like an article page)
In this scenario, you lose most of the selector issues described above, but you also lose some of the caching advantages. The worst part of this technique is that if you have the same styles on 2 different page templates, they'll be downloaded twice, once for each template! For instance, this would happen with all your global styles. :(
Summary:
So, as is common in programming, neither solution is perfect, but if anyone has run into this and found an answer I'd love to hear it! Especially, if you know of any techniques that help with the selector issue of Solution A.
Of course, combine and minify all the global styles, like your site template, typography, forms, etc. I would also consider combining the most important and most frequently used module styles into the global stylesheet, certainly the ones that you plan to use on the home page or entry point.
Solution B isn't a good one: the user ends up downloading the same content for each unique layout/page when you could have just loaded parts of it from the last page's cache. There is no advantage whatsoever to this method.
For the rest, I would leave them separate (and minified) and just load them individually as needed. You can use any of the preloading techniques described on the Yahoo! Developer network's "Best Practices for Speeding Up Your Web Site" guide to load the user's cache beforehand:
Preload Components
By preloading components you can take advantage of the time the browser is idle and request components (like images, styles and scripts) you'll need in the future. This way when the user visits the next page, you could have most of the components already in the cache and your page will load much faster for the user. There are actually several types of preloading:
Unconditional preload - as soon as onload fires, you go ahead and fetch some extra components. Check google.com for an example of how a sprite image is requested onload. This sprite image is not needed on the google.com homepage, but it is needed on the consecutive search result page.
Conditional preload - based on a user action you make an educated guess where the user is headed next and preload accordingly. On search.yahoo.com you can see how some extra components are requested after you start typing in the input box.
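For the unconditional case, a minimal sketch might look something like this (the file names are hypothetical; older browsers could fall back to new Image() for image assets):
// once the current page has finished loading, hint the browser to fetch
// assets the next page will probably need, at idle priority
window.addEventListener('load', function() {
    ['/css/article.css', '/js/article.js', '/img/sprite.png'].forEach(function(url) {
        var link = document.createElement('link');
        link.rel = 'prefetch';
        link.href = url;
        document.head.appendChild(link);
    });
});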
As far as the conflicting selectors go: whether or not you combine all the files should not make a difference; this is a problem with your design. If possible, have all modules "namespaced" somehow, perhaps by using a common prefix for classes specific to the module, like blog-header or storefront-title. There is a concept called "Object-oriented CSS" that might reduce the need for lots of redundant CSS and module-specific class names, but this is normally done during the design phase, not something you can "tack on" to an existing project.
Fewer HTTP requests are better, but you have to take file size into consideration, as well as user behavior and activity. The initial download time of the entry page is the most important thing, so you don't want to bog it down with stuff you won't use until later. If you really want to crunch the numbers, try a few different things and profile your site with Firebug or Chrome's developer tools.
I think you can make a global.css that stores the styles every template needs.
And you could make a separate CSS file for each template.
Or simply use a CSS framework like LESS.

Rendering javascript at the server side level. A good or bad idea?

Now a community wiki!
I want to make it clear first: This isn't a question in relation to server-side Javascript or running Javascript server side. This is a question regarding rendering of Javascript code (which will be executed on the client-side) from server-side code.
Having said that, take a look at below ASP.net code for example:
hlRemoveCategory.Attributes.Add("onclick", "return confirm('Are you sure you want to delete this?');")
This wires up the client-side onclick event from server-side code.
As opposed to writing the Javascript on the client side:
$('a[rel=remove]').bind('click', function(event) {
    return confirm('Are you sure you want to delete this?');
});
Now the question I want to ask is: What is the benefit of rendering javascript from the server-side code? Or the vice-versa?
I personally prefer the second way of hooking up client-side UI/behaviour to HTML elements for the following reasons:
Server-side does whatever it needs to already, including data validation, event delegation, etc.; and
What server-side sees as an event is not necessarily the same process on the client-side. i.e., there are plenty more events on client-side (just look at custom events); and
What happens on client-side and on server-side, during an event, could be completely irrelevant and decoupled; and
Whatever happens on the client side happens on the client side; there is no need for the server to know. The server should process and run what is given to it; how that behaviour comes to life is not really up to it to decide in the case of client-side events; and so on and so forth.
These are my thoughts obviously. I want to know what others think and if there has been any discussions on this topic.
Topics branching from this argument can reach:
Code management: is it easier to render everything from server-side?
Separation of concern: is it easier if client-side logic is separated to server-side logic?
Efficiency: which is more efficient both in terms of coding and running?
At the end of the day, I am trying to move my team to go towards the second approach. There are lot of old guys in this team who are afraid of this change. I just wish to convince them with the right facts and stats.
Let me know your thoughts.
UPDATE1: It looks like all of us who have participated in this post share a common view; good to know that there are others who think alike. Now to go convince the guys ;) Thanks, everyone.
Your second example is vastly superior to the first example. Javascript is your behaviour layer and should be separate from your semantic markup (content) and CSS (presentation). There are a number of reasons this is better architecture:
Encourages progressive enhancement. As you mentioned, the backend code should work correctly in the absence of JS. You cannot rely on your clients having JS available. This way you build it once without JS and can then enhance the experience for those with JS (e.g. by adding client-side validation as well as server-side validation so that the client gets instant feedback)
Cleaner markup. Normally reduced download size. One reusable selector in a separate JS file that can be cached and shared between pages vs. a handler on each element.
All of your JS in one re-used place. e.g. if your code was opening a popup window and you decided to change the dimensions of the window you would change it once in the code in the JS file vs. having to change it on every individual inline handler.
There are lots of other arguments and reasons but they should get you started...
Also, from your example it appears that you have a normal link in your document which can delete content. This would also be a bad practice. Anything that deletes or updates content should be done on a POST (not GET) request. So it should be the result of submitting a form. Otherwise e.g. googlebot could accidentally delete all of your content by just crawling your page (and search engine robots don't execute JS so your alert wouldn't help there)
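For completeness, the unobtrusive version of that pattern might look roughly like this (the form selector is hypothetical): the delete is a real form submitted via POST, and the script only layers a confirmation on top, so it still works without JS.
// bind the confirmation to the form's submit; returning false cancels the POST
$('form.delete-category').bind('submit', function() {
    return confirm('Are you sure you want to delete this?');
});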
The biggest differences I can think of up front are:
you lose the client side caching you would get if the javascript was in a separate js file
if you need to change your javascript, you have to recompile (extrapolate this to what happens after you have released your product: if you have to recompile then you need to redistribute binaries instead of just a modified js file)
it is easier to use the VS debugger if the javascript is in a separate file; you can just set a breakpoint in that file. If you are generating the code server-side, you have to use the Running Documents feature, find your generated code and then add the breakpoint, and that breakpoint has to be manually added every time you re-run your app. Following on from that, if the code is in a separate file, you can just make your tweak to the javascript code, F5 your browser page, and keep on debugging without having to stop and restart the debugger.
It should be mentioned that sometimes you have to insert js code from the server - for example, if the bulk of your code is in a separate js file and you need to insert control identities into the page for that code to work with. Just try to avoid that situation if possible.
Looks like you already know what to do. Rendering it on the server side is a bad idea.
The simple reasoning being that your Javascript lives both in the server-side pages as well as in separate Javascript files (assuming you are using Javascript at all). It can become a debugging nightmare to fix things when everything is everywhere.
Were you not using any other Javascript besides what the server side scripts generate, it would probably be fine and manageable (forget what the unobtrusive movement says).
Secondly, if you have 100 links on the page, you will be repeating that same code in 100 places. Repetition is another maintenance and debugging nightmare. You can handle all links on all pages with one event handler and one attribute. That doesn't even need a second thought.
<Rant>
It's not easy to separate HTML and Javascript, and even CSS especially if you want some AJAX or UI goodness. To have total separation we would have to move to a desktop application model where all the front-end code is generated on the client side programmatically using Javascript, and all interaction with the server gets limited to pure data exchange.
Most upstream communication (client to server) is already just data exchange, but not the downstream communications. Many server-side scripts generate HTML, merge it with data and spit it back. That is fine as long as the server stays in command of generating the HTML views. But when fancy Javascript comes onboard and starts appending rows to tables, and div's for comments by replicating the existing HTML structure exactly, then we have created two points at which the markup gets generated.
$(".comments").append($("<div>", {
"id": "123",
"class": "comment",
"html": "I would argue this is still bad practice..."
}));
Maybe this is not as big a nightmare (depending on the scale), but it can be a serious problem too. Now if we change the structure of the comments, the change needs to be done at two places - the server side script and templates where content is initially generated, and the Javascript side which dynamically adds comments after page load.
A second example is about applications that use drag and drop. If you can drag div's around the page, they need to be taken out of the regular page flow and positioned absolutely or relatively with precise coordinates. Now since we cannot create classes beforehand for all possible coordinates (and it would be stupid to attempt that), we basically inject styles directly into the element. Our HTML then looks like:
<div style="position: absolute; top: 100px; left: 250px;">..</div>
We have screwed up our beautiful semantic pages, but it had to be done.
</Rant>
Semantic and behavioral separation aside, I would say it basically boils down to repetition. Are you repeating code unnecessarily? Are multiple layers handling the same logic? Is it possible to shove all of it into a single layer, or to cut down on the repetition?
You and the other people answering the question have already listed reasons why it is better not to have the server-side code spit intrinsic event attributes into documents.
The flip side of the coin is that doing so is quick and simple (at least in the short term).
IMO, this doesn't come close to outweighing the cons of the approach, but it is a reason.
For the code in your example it doesn't really matter. The code isn't using any information that is only available at the server side, so it's just as easy to bind the event in client side code.
Sometimes you want to use some information that is available at the server side to decide whether the event should be added or not, or to create the code for the event, for example:
if (categoryCanBeDeleted) {
    hlRemoveCategory.Attributes.Add(
        "onclick",
        "return confirm('Are you sure you want to delete the " + categoryType + "?');"
    );
}
If you did this on the client side, you would have to put this information into the page somehow so that the client-side code also has access to it.
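One common way to do that (a sketch, not from the question; the attribute names are hypothetical) is to have the server render the flags as data-* attributes on the link and have the client-side handler read them:
// server renders e.g.:
// <a rel="remove" data-can-delete="true" data-category-type="album" href="...">Delete</a>
$('a[rel=remove]').bind('click', function() {
    var $link = $(this);
    if ($link.data('can-delete') !== true) {
        return true; // no confirmation needed for this category
    }
    return confirm('Are you sure you want to delete the ' + $link.data('category-type') + '?');
});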

jQuery: Setting Functions Per Page

I currently have a single jQuery script file that I include via the ScriptManager on my MasterPage (I use ASP.NET). All of my custom scripts for all my pages go in this file. I know some people like to break it up into multiple files, but I prefer having them all in one place. Anyway, I have one $(document).ready(function(){ //do stuff// }); in this file. All of my other functions are named and then called from the document.ready function, i.e.:
$(document).ready(function() {
    Function1(); // necessary for page1.aspx, page2.aspx but not 3 or 4
    Function2(); // etc., necessary for some but not other pages
});

function Function1() {
    // whatever, etc //
}
My question centers around the efficiency of this. A lot of these functions only have any sort of application on certain pages, but not on all. Would it be better to include some page-specific coding, ie, determine what page we're on, and then if we're on the correct page, go ahead and perform the function, or does it not even matter? I was under the impression that it didn't matter, but I would like to be sure.
Thanks
You might consider putting everything into an external .js file. Then the browser only has to download the code once.
If the functions cause errors on pages where they're not needed, you should only include them on pages where they are needed.
Otherwise, it depends on how long it takes the functions to execute. If it's not noticeably long, I would say don't worry about it. It's not worth the fragility you'd introduce by adding a bunch of if statements.
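For reference, the page-specific checks being weighed here usually look something like this (the selectors are hypothetical): each init only runs if the markup it targets is actually on the page, which is cheap but does add the fragility mentioned above as the list grows.
$(document).ready(function() {
    // only run an init when the page contains the element it works on
    if ($('#photo-gallery').length) {
        Function1();
    }
    if ($('#signup-form').length) {
        Function2();
    }
});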
I can't say if this is a definitive answer, but wherever possible in any project I try to ship as little code as possible that isn't relevant to the page.
So in the case of jQuery, I would only emit the javascript code required for that page by doing a server-side check.
But I guess this really depends on how large the code is: if it's a small one-line script in jQuery then I don't think you're really going to lose out, but if you have a lot of logic in there, then I would recommend breaking it down per page that requires it.
Depending on the number of page-specific functions, it could matter a lot.
If you have a small amount of extra data then you won't see any noticeable performance decreases. However, as your code base grows it could become a mess to maintain that code.
Additionally, you are pushing extra bits across the wire in every page request; as traffic increases, that may add up to extra bandwidth costs.
Yes. It matters. The real question is how complicated those functions are. If you think the code to check if the functions are needed would be more time consuming than running the code, then it's not necessary.
So that is the question you need to ask yourself (or test): Does it take longer to test if I need the function, or just to run them?

Resources