I am about to implement a function that loads a potentially large set of data (~1000 rows with ~10 columns). I am planning to implement an infinite scrolling solution (ajax, jQuery, asmx) as a performance measure. However, if a user has javascript disabled or the googlebot comes a-crawling, I would like to generate the entire set of data all at once, so that no data becomes inaccessible in either of those two scenarios.
I'm not sure what approach to use here. Should I look towards the noscript-tag perhaps?
In my experience, if you're expecting 1000 rows and any sort of traffic, you will need to offer two scenarios.
I would use the noscript tag and then offer a paginated view for non js users. Or, you can do as I have done in the past and simply explain that your application requires that javascript be left on (also through the noscript tag). Anyone that runs around the internet with javascript turned off is most likely going to be used to the internet not working the way it should, or only getting a partial experience.
Try a client-side pagination class with JS:
it's lightweight and very user friendly, and if the browser doesn't run JS,
no problem, the user just sees one huge data table :)
http://en.newinstance.it/2006/09/27/client-side-html-table-pagination-with-javascript/
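For what it's worth, here is a minimal sketch of that idea (table id and page size are made up, and jQuery is assumed): the full table is in the HTML, so non-JS users and crawlers still see every row, and the script only hides the rows that are not on the current page.

var pageSize = 50; // illustrative page size
var $rows = $('#dataTable tbody tr');

function showPage(page) {
    // hide everything, then reveal only the rows belonging to this page
    $rows.hide()
         .slice(page * pageSize, (page + 1) * pageSize)
         .show();
}

showPage(0); // start on the first page; wire prev/next links to call showPage()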
Using ASP.NET MVC: I am caught in the middle between rendering my views on the server vs. in the client (say, jQuery templates). I do not like the idea of mixing the two, as I hear some people say. For example, some have said they will render the initial page (say, a list of comments) server side, and then when a new comment is added they use client-side templating. The requirement to have the same rendering logic in two different areas of your code makes me wonder how people convince themselves it is worth it.
What are the reasons you use to decide which to use where?
How does your argument change when using ASP.NET Web Forms?
One reason that people do that is because they want their sites to get indexed by search engines but also want to have the best user experience, so are writing client code for that. You have to decide what makes sense given the constraints and goals you have. Unfortunately, what makes the most business sense won't always seem to make the most sense from a technical perspective.
One advantage to server-side rendering is that your clients don't have to use javascript in order for your pages to be functional. If you're relying on JQuery templates, you pretty much have to assume that your page won't have any content when rendered without javascript. For some people this is important.
As you say, I would prefer not to use the same rendering logic twice, since you run the risk of letting it get out of sync.
I generally prefer to just leverage partial views to generate most content server-side. Pages with straight HTML tend to render a bit faster than pages that have to be "built" after they've loaded, making the initial load a little speedier.
We've developed an event-based AJAX architecture for our application which allows us to generate a piece of content in response to the result of an action, and essentially send back any number of commands to the client-side code to say "Use the results of this rendered partial view to replace the element with ID 'X'", or "Open a new modal popup dialog with this as the content." This is beneficial because the server-side code can have a lot more control over the results of an AJAX request, without having to write client-side code to handle every contingency for every action.
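A rough client-side sketch of that kind of dispatcher (the URL, command names and jQuery UI dialog are assumptions for illustration, not our actual API): the server responds with a JSON array of commands and the script applies each one.

$.post('/comments/add', $('#commentForm').serialize(), function (commands) {
    // e.g. [{ "command": "replace", "targetId": "commentList", "html": "<ul>...</ul>" },
    //       { "command": "dialog", "html": "<p>Saved.</p>" }]
    $.each(commands, function (i, cmd) {
        switch (cmd.command) {
            case 'replace':
                // swap the element's contents with the rendered partial view
                $('#' + cmd.targetId).html(cmd.html);
                break;
            case 'dialog':
                // open a modal popup with the returned content (jQuery UI assumed)
                $('<div>').html(cmd.html).dialog({ modal: true });
                break;
        }
    });
}, 'json');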
On the other hand, putting the server in control like this means that the request has to return before the client-side knows what to do. If your view rendering was largely client-based, you could make something "happen" in the UI (like inserting the new comment where it goes) immediately, thereby improving the "perceived speed" of your site. Also, the internet connection is generally the real speed bottleneck of most websites, so just having less data (JSON) to send over the wire can often make things more speedy. So for elements that I want to respond very smoothly to user interaction, I often use client-side rendering.
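As a hedged illustration of that "perceived speed" point (form id, URL and markup are made up), the new comment is rendered on the client immediately and persisted in the background:

$('#commentForm').bind('submit', function (event) {
    event.preventDefault();
    var text = $('#commentText').val();
    // insert the comment into the list right away so the UI feels instant
    $('<div>', { 'class': 'comment', text: text }).appendTo('#comments');
    // persist it; on failure you would remove the optimistic entry or show an error
    $.post('/comments/add', { text: text });
});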
In the past, search-engine optimization was a big issue here as well, as Jarrett Widman says. But my understanding is that most modern search engines are smart enough to evaluate the initial javascript for pages they visit and figure out what the page would actually look like after it loads. Google even recommends the use of the "shebang" (#!, often called the "hashbang") in your URLs to help them index pages that are dynamically loaded by AJAX.
I understand that frames are a lot more typing work to implement than iframes, and that they require a lot more styling than iframes. I am currently working on a website which must download some content (in fact, an entire set of webpages) from another website, one by one of course, depending on the user's actions on the main website. Iframes seem like a quick and dirty way to implement this requirement, but what I am worried about is performance and integrity.
I would like some advice on what I should use when the following criteria are met:
The pages that must be downloaded onto my webpage are quite large (width and height)
Contains multiple images
Experiences occasional downtime (maintenance)
Any ideas for a man in wonder?
At this point, go with iFrames:
iFrame is HTML5. Frameset is obsolete in HTML5.
A frameset forces you to load a page into each frame; iframes can be embedded anywhere in a document.
You can style, hide, resize either, but iFrames are much easier to work with in this regard.
I've seen cases where the developer went with a frameset because he couldn't get the iframe to size properly, but this isn't too big a deal with a little Javascript (if even that). The only reason to use a frameset is if you don't fear its eventual deprecation in modern browsers, and/or if you can't get iframes to size the way you want based on the content you're integrating and need a quick solution.
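For reference, that "little Javascript" sizing fix usually looks something like this sketch (iframe id is made up, jQuery assumed); note it only works when the framed page comes from the same origin, since cross-origin content blocks access to contentWindow.document:

$('#contentFrame').bind('load', function () {
    // stretch the iframe to the height of the document it just loaded
    var doc = this.contentWindow.document;
    $(this).height(doc.body.scrollHeight);
});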
If this is about display of 3rd party data in your site, you could use data feeds from the other sites if they're available, or use a screen scraper to extract the information you need, then display it in your own way on the page.
Unless it needs to look exactly like the other page.
Check out this link on screen scraping for ASP.NET
I am working with ASP.NET and I have two gridview controls and some link buttons. To bind these gridviews, I have to call web services and data access code. Since I am pulling a large amount of data, the page loads slowly. I am wondering if there is a way I could do a partial page load, meaning that I would like to show the link buttons first and then show the rest of the gridviews as the data becomes available (to bind to the gridviews).
Is there a way I can accomplish this? (Preferably, without AJAX).
Thanks.
If you want a truly AJAX-less method, you could go with the ol' trusty IFrame tags and have your gridviews be stand alone pages. I believe the page will render around the IFrames while the IFrames themselves load.
NOTE: I also am not advocating this as the best solution, but it may meet the intent of this scenario.
Not without AJAX. But can you define what you mean by "without AJAX"?
Have you seen PageMethods? They may do what you intend, in a way that is palatable to you.
Alternately, you may mean "without UpdatePanels" in which case, are you familiar with XMLHttpRequests? (Note: I do not intend that an XHR is the appropriate solution here, I'm probing for familiarity with the topic)
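For completeness, a bare XMLHttpRequest (no UpdatePanel, no library) looks roughly like this; the URL and placeholder id are illustrative only:

var xhr = new XMLHttpRequest();
xhr.open('GET', 'GridViewPartial.aspx', true);
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        // drop the returned fragment into a placeholder on the page
        document.getElementById('gridPlaceholder').innerHTML = xhr.responseText;
    }
};
xhr.send();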
First, a couple of things: you may want to limit the data you are grabbing. If you are using a GridView, this data will be stored in view state, causing huge overhead. If you are only displaying data, consider using a Repeater or DataList instead; they are lighter. In any case you should be using pagination, though you may have to implement a custom pagination solution for the Repeater.
I want to make it clear first: This isn't a question in relation to server-side Javascript or running Javascript server side. This is a question regarding rendering of Javascript code (which will be executed on the client-side) from server-side code.
Having said that, take a look at the ASP.NET code below, for example:
hlRemoveCategory.Attributes.Add("onclick", "return confirm('Are you sure you want to delete this?');")
This is prescribing the client-side onclick event on the server-side.
As opposed to writing Javascript on the client-side:
$('a[rel=remove]').bind('click', function(event) {
return confirm('Are you sure you want to delete this?');
});
Now the question I want to ask is: what is the benefit of rendering javascript from server-side code? Or vice versa?
I personally prefer the second way of hooking up client-side UI/behaviour to HTML elements for the following reasons:
The server-side already does whatever it needs to, including data validation, event delegation, etc.; and
What the server-side sees as an event is not necessarily the same process on the client-side, i.e., there are plenty more events on the client-side (just look at custom events); and
What happens on the client-side and on the server-side during an event could be completely unrelated and decoupled; and
Whatever happens on the client-side happens on the client-side; there is no need for the server to know. The server should process and run what is given to it; how that process comes to life in the case of client-side events is not really up to the server to decide; and so on and so forth.
These are my thoughts obviously. I want to know what others think and if there has been any discussions on this topic.
Topics branching from this argument can reach:
Code management: is it easier to render everything from server-side?
Separation of concerns: is it easier if client-side logic is separated from server-side logic?
Efficiency: which is more efficient both in terms of coding and running?
At the end of the day, I am trying to move my team towards the second approach. There are a lot of old guys on this team who are afraid of this change. I just wish to convince them with the right facts and stats.
Let me know your thoughts.
UPDATE1: It looks like all of us who have participated in this post share the same view; good to know that there are others who think alike. Now to go convince the guys ;) Thanks everyone.
Your second example is vastly superior to the first example. Javascript is your behaviour layer and should be separate from your semantic markup (content) and CSS (presentation). There are a number of reasons this is better architecture:
Encourages progressive enhancement. As you mentioned, the backend code should work correctly in the absence of JS. You cannot rely on your clients having JS available. This way you build it once without JS and then can enhance the experience for those with JS (e.g. by adding clientside validation as well as serverside validation so that the client can get instant feedback)
Cleaner markup. Normally reduced download size. One reusable selector in a separate JS file that can be cached and shared between pages vs. a handler on each element.
All of your JS lives in one re-used place (see the sketch after this list): e.g. if your code was opening a popup window and you decided to change the dimensions of the window, you would change it once in the JS file vs. having to change it on every individual inline handler.
There are lots of other arguments and reasons but they should get you started...
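To make the third point concrete, here is a minimal sketch (class name and dimensions are invented): one handler in a cached .js file replaces a per-link inline onclick.

$('a.popup').bind('click', function (event) {
    event.preventDefault();
    // change the dimensions here once instead of in every inline handler
    window.open(this.href, 'popup', 'width=600,height=400');
});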
Also, from your example it appears that you have a normal link in your document which can delete content. That is also bad practice. Anything that deletes or updates content should be done via a POST (not GET) request, so it should be the result of submitting a form. Otherwise, e.g., googlebot could accidentally delete all of your content just by crawling your page (and search engine robots don't execute JS, so your confirm wouldn't help there).
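A sketch of the safer pattern (the form class is made up): the delete is a form POST, and the confirm() is attached unobtrusively rather than inline.

$('form.delete-category').bind('submit', function () {
    // returning false cancels the POST if the user changes their mind
    return confirm('Are you sure you want to delete this?');
});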
The two biggest differences I can think of up front are:
you lose the client side caching you would get if the javascript was in a separate js file
if you need to change your javascript, you have to recompile (extrapolate this to what happens after you have released your product: if you have to recompile then you need to redistribute binaries instead of just a modified js file)
it is easier to use the VS debugger if the javascript is in a separate file; you can just set a breakpoint in that file. If you are generating the code server side, then you have to use the Running Documents feature, find your generated code and then add the breakpoint, and that breakpoint has to be manually added every time you re-run your app. Following on from that, if the code is in a separate file, then you can just make your tweak to the javascript code, hit F5 in your browser, and keep on debugging without having to stop and restart the debugger.
It should be mentioned that sometimes you have to insert js code from the server - for example if the bulk of your code is in a separate js file and you need to insert control identities into the page for that code to work with. Just try to avoid that situation if possible.
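One common way to keep that situation small (names are illustrative): the page emits only a tiny configuration object with the generated ClientIDs, and the shared, cacheable .js file reads it.

// rendered by the server into the page, e.g.:
//   <script>window.pageConfig = { removeLinkId: 'ctl00_Main_hlRemoveCategory' };</script>
// then, in the shared script file:
$(function () {
    $('#' + window.pageConfig.removeLinkId).bind('click', function () {
        return confirm('Are you sure you want to delete this?');
    });
});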
Looks like you already know what to do. Rendering it on the server side is a bad idea.
The simple reasoning being that your Javascript lives both in the server-side pages as well as in separate Javascript files (assuming you are using Javascript at all). It can become a debugging nightmare to fix things when everything is everywhere.
Were you not using any other Javascript besides what the server side scripts generate, it would probably be fine and manageable (forget what the unobtrusive movement says).
Secondly, if you have 100 links on the page, you will be repeating that same code in 100 places. Repetition is another maintenance and debugging nightmare. You can handle all links on all pages with one event handler and one attribute. That doesn't even need a second thought.
<Rant>
It's not easy to separate HTML and Javascript, and even CSS, especially if you want some AJAX or UI goodness. To have total separation we would have to move to a desktop-application model, where all the front-end code is generated on the client side programmatically using Javascript, and all interaction with the server is limited to pure data exchange.
Most upstream communication (client to server) is already just data exchange, but not the downstream communications. Many server-side scripts generate HTML, merge it with data and spit it back. That is fine as long as the server stays in command of generating the HTML views. But when fancy Javascript comes onboard and starts appending rows to tables, and div's for comments by replicating the existing HTML structure exactly, then we have created two points at which the markup gets generated.
$(".comments").append($("<div>", {
"id": "123",
"class": "comment",
"html": "I would argue this is still bad practice..."
}));
Maybe this is not as big a nightmare (depending on the scale), but it can be a serious problem too. Now if we change the structure of the comments, the change needs to be done at two places - the server side script and templates where content is initially generated, and the Javascript side which dynamically adds comments after page load.
A second example is about applications that use drag and drop. If you can drag divs around the page, they need to be taken out of the regular page flow and positioned absolutely or relatively with precise coordinates. Now since we cannot create classes beforehand for all possible coordinates (and that would be stupid to attempt), we basically inject styles directly into the element. Our HTML then looks like:
<div style="position: absolute; top: 100px; left: 250px;">..</div>
We have screwed up our beautiful semantic pages, but it had to be done.
</Rant>
Semantic and behavioral separation aside, I would say it basically boils down to repetition. Are you repeating code unnecessarily? Are multiple layers handling the same logic? Is it possible to shove all of it into a single layer, or cut down on the repetition?
You and the other people answering the question have already listed reasons why it is better not to have the server-side code spit intrinsic event attributes into documents.
The flip side of the coin is that doing so is quick and simple (at least in the short term).
IMO, this doesn't come close to outweighing the cons of the approach, but it is a reason.
For the code in your example it doesn't really matter. The code isn't using any information that is only available at the server side, so it's just as easy to bind the event in client side code.
Sometimes you want to use some information that is available at the server side to decide whether the event should be added or not, or to create the code for the event, for example:
if (categoryCanBeDeleted) {
hlRemoveCategory.Attributes.Add(
"onclick",
"return confirm('Are you sure you want to delete the " + categoryType + "?');"
);
}
If you wanted to do this on the client side, you would have to put this information into the page somehow so that the client-side code also has access to it.
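One way to do that (markup and attribute names are invented for illustration) is to render the server-side decision as data-* attributes and read them from the shared script:

// server renders something like:
//   <a id="hlRemoveCategory" data-can-delete="true" data-category-type="album" href="...">Remove</a>
$('#hlRemoveCategory').bind('click', function () {
    var $link = $(this);
    if ($link.attr('data-can-delete') !== 'true') {
        return true; // nothing to confirm, follow the link normally
    }
    return confirm('Are you sure you want to delete the ' + $link.attr('data-category-type') + '?');
});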
When does JavaScript become unavoidable, and when would you want to choose JavaScript over server-side code when you do have a choice?
When you need to change something on your page without reloading it.
Designer perspective:
When you want to give more interactivity to your web page
When you want to load stuff without reloading the page (e.g. AJAX)
When you shouldn't use it:
When you don't want to spend 1000 hours in pointless attempts to disable the back button of your browser :)
Google Maps would not have been possible without JavaScript. At least not in the form we know (and love) it today. So, depending on your ambition (and requirements), sometimes JavaScript is clearly unavoidable, even though there are technologies that take another approach and might have worked equally well (Java applets, RIA technologies, etc.).
If I had the choice I would probably choose JavaScript over a server-side implementation for a large number of applications, but then again, it is not a black-and-white picture. The server will remain important for web applications for a very long time to come.
Yes, you do. Now take out your textbook and turn to Chapter One. ;)
Honestly, though, to answer your question, I don't think it's ever "unavoidable," no; you can always code for the absence of JavaScript. (Indeed, usability best practices dictate you at least try to "gracefully degrade" the user experience for browsers that don't support it, or for users who choose not to.) In the beginning, of course, there was no JavaScript -- but there was still the Web. It just, well, kinda sucked.
There's no simple answer, but if you absolutely must have one, the most straight-ahead one I can think of is to use JavaScript to improve the user experience. Secondarily, use it to shift the workload from your server toward the browser (Hello, Ajax!) -- validation, state, etc. Those are two big reasons, broadly stated, IMO.
At a stretch, you could do everything via server-side programming. Some things will be painfully slow and difficult to pull off right, but it is possible. If you want to see something clever done with nothing more than CSS, try out the search feature on lxr.mozilla.org with scripting disabled.
Practically though, the best places to use javascript are where it'd otherwise disrupt the flow of what the user's doing - the AJAXy things on here are all pretty good examples, as is the realtime preview (everyone should have one of these!)
If it provides significant benefit, then it's completely fine to use it. But please, don't make it required unless the server-only equivalent is needlessly complicated.
I never want to choose JavaScript, but it becomes unavoidable when clients want a decent web app. JavaScript has the unique feature of low latency feedback in a browser - server side code doesn't.
Also, there are a (rather limited) number of times when it's actually easier to bust out some JQuery for formatting rather than dealing with ASP.NET's event model to manipulate client elements. But, I'd say those are relatively few.
If you are looking for a responsive UI and want to avoid JavaScript, consider some of the RIA technologies such as Flex, Silverlight or JavaFX. I've been developing with Flex since v1.5 and find it very capable and productive. Silverlight is getting significantly better with each release, too.
GWT is a nice system that compiles Java code into javascript, which becomes kind of like a "machine language" for the browser; you never have to consider it.
I believe most Google apps are written using GWT. It's pretty slick.
All your source code is pretty much straight Java using a library somewhat like Swing.
JavaScript is not a necessity but, coupled with the DOM API, it provides a very useful medium for gradually improving the user experience of your site. Obviously the extent to which this is true is dependent on how well you execute these enhancements, don't just use JavaScript for the sake of it; it's a design decision, and should not be taken lightly.
Sharing the load between client and server.
Try to keep it natural. Use it to enhance little things; do not build your whole application on JS.
A good way to measure the breadth of your skills is when "Do I have to use ..." starts disappearing from your conversation. Hopefully you get to the point where a language is just a language, and you develop a feel for what the right tool is for the job, and can use it as comfortably as any other language.
If it's any consolation, there's increasing evidence that there's enough conventional wisdom and available toolsets that developers are increasingly preferring javascript. I personally like it for its conciseness and easy extensibility - once you get to know it.
You don't have to.
But if you want to provide UI extras like autocomplete, drag and drop, richer form entry fields, etc., Javascript is your only answer.
Of course you can abstract that Javascript generation out to the server side but you'd still be tussling with Javascript, albeit via programmatic code generation.
When you create a Rich Internet Application which gets loaded into a browser and which communicates with a web server through an open API (like SOAP, JSON, etc.)
"Javascript over server side " JavaScript is client side, not server side.
You would still need to run server side stuff to get the data from the server. Javascript just gives you more control on how the client receives the data from the server.
JavaScript is unavoidable when you want a certain dynamic feature on your website which HTML and CSS alone do not support.
Dynamic feature examples:
- Without JavaScript: you can use CSS to change the background of an element when the mouse is over it. You can do the same thing with JavaScript.
- Only in JavaScript: when you click an element with your mouse, the element disappears. I don't think that is possible in CSS (maybe you could use :active on some elements; I've never tested it myself).
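A tiny sketch of that second case (selector is made up); the hover effect stays in CSS, while hiding on click needs script:

// CSS handles the hover case: .box:hover { background: #eee; }
// hiding an element when it is clicked needs JavaScript:
$('.dismissable').bind('click', function () {
    $(this).hide();
});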