ASP.NET Client vs Server View Rendering

Using ASP.NET MVC: I am caught in the middle between rendering my views on the server vs in the client (say, jQuery templates). I do not like the idea of mixing the two, as I hear some people do. For example, some have said they will render the initial page (say, a list of comments) server-side, and then when a new comment is added they use client-side templating. The requirement to have the same rendering logic in two different areas of your code makes me wonder how people convince themselves it is worth it.
What are the reasons you use to decide which to use where?
How does your argument change when using ASP.NET Web Forms?

One reason people do that is that they want their sites to be indexed by search engines but also want the best user experience, so they write client-side code for the latter. You have to decide what makes sense given the constraints and goals you have. Unfortunately, what makes the most business sense won't always seem to make the most sense from a technical perspective.

One advantage of server-side rendering is that your clients don't have to have JavaScript enabled for your pages to be functional. If you're relying on jQuery templates, you pretty much have to assume that your page won't have any content when rendered without JavaScript. For some people this is important.
As you say, I would prefer not to use the same rendering logic twice, since you run the risk of letting it get out of sync.
I generally prefer to just leverage partial views to generate most content server-side. Pages with straight HTML tend to render a bit faster than pages that have to be "built" after they've loaded, making the initial load a little speedier.
We've developed an event-based AJAX architecture for our application which allows us to generate a piece of content in response to the result of an action, and essentially send back any number of commands to the client-side code to say "Use the results of this rendered partial view to replace the element with ID 'X'", or "Open a new modal popup dialog with this as the content." This is beneficial because the server-side code can have a lot more control over the results of an AJAX request, without having to write client-side code to handle every contingency for every action.
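A minimal sketch of that idea, assuming a hypothetical JSON shape (a commands array with name, targetId and html fields), a hypothetical URL and form ID, and a jQuery UI dialog for the popup case; the server decides what should happen, and a small client-side dispatcher just applies it:
// Hypothetical client-side dispatcher for server-issued commands.
var formData = $('#comment-form').serialize();
$.post('/comments/add', formData, function (response) {
    $.each(response.commands, function (i, cmd) {
        if (cmd.name === 'replace') {
            // "Use the results of this rendered partial view to replace the element with ID 'X'"
            $('#' + cmd.targetId).html(cmd.html);
        } else if (cmd.name === 'openDialog') {
            // "Open a new modal popup dialog with this as the content" (e.g. via jQuery UI)
            $('<div>').html(cmd.html).dialog({ modal: true });
        }
    });
}, 'json');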
On the other hand, putting the server in control like this means that the request has to return before the client-side knows what to do. If your view rendering was largely client-based, you could make something "happen" in the UI (like inserting the new comment where it goes) immediately, thereby improving the "perceived speed" of your site. Also, the internet connection is generally the real speed bottleneck of most websites, so just having less data (JSON) to send over the wire can often make things more speedy. So for elements that I want to respond very smoothly to user interaction, I often use client-side rendering.
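A minimal sketch of that optimistic, client-rendered approach, assuming hypothetical element IDs and URL; the comment appears in the DOM immediately and only a small JSON payload crosses the wire:
$('#add-comment').bind('click', function () {
    var text = $('#comment-text').val();
    // Render the comment on the client straight away, for perceived speed...
    $('<div class="comment">').text(text).appendTo('#comments');
    // ...then send just the data to the server in the background.
    $.post('/comments/add', { text: text });
    return false;
});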
In the past, search-engine optimization was a big issue here as well, as Jarrett Widman says. But my understanding is that most modern search engines are smart enough to execute the initial JavaScript for pages they visit and figure out what the page would actually look like after it loads. Google even recommends the use of the "shebang" (#!, also called the hashbang) in your URLs to help it index pages that are dynamically loaded by AJAX.

Related

Gridviews/Tables paging. POST, GET or AJAX?

I am building my own GridView in an ASP.NET project
I am drawing out my plans and I was wondering what the best solution is to a simple problem, paging and sorting.
The fast and easy way is using submit buttons (or similar) and POSTING the form back. That's also how the ASP.NET gridview works.
pro:
less overhead
con:
back button doesn't work as expected
The second method is using links and the URL with GET requests.
pro:
back button works just fine
direct link to certain position
con:
less reusable because of the dependence on url
The third method is AJAX
pro:
little overhead
con:
harder to implement
What design/solution would you pick and why?
Am I overlooking some pros and cons?
I'll add some extra points to think about.
-The second method is using links and the URL with GET requests.
This is the one to use if you need web spiders (Google) to know all the pages of your site and to be SEO friendly. The problem with this method is that you cannot have ViewState, and each time you must render the page described by the URL parameters without knowing anything else.
In this case you will probably also have more problems if you want to make an edit on a single line.
-The fast and easy way is using submit buttons (or similar) and POSTing the form back
This is the method to use if you want a lot of functionality in the code-behind, because with the postback you have all the previous actions you have performed, and ViewState works and can be used for that. It is not SEO friendly, and if you want to make it so, you need extra code to write the current page into the URL so it can be landed on directly.
-The third method is AJAX
This is the method that must co-exist with one of the previous two rather than stand alone, for the case where the browser fails to run JavaScript for any reason. If you do not care about that, what remains is that this method is also not SEO friendly and you need to make it so yourself; it is cool, modern, and a must for a modern site, but if you are going to do difficult things with it you may end up with many issues that have to be solved. A sketch of combining it with the GET approach follows after the summary.
To summarize:
More than show data ? Post Request : Get Request ; // ToDo: make it ajax
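A minimal sketch of that coexistence, assuming hypothetical markup (plain GET paging links with class page-link and a grid container with id grid): spiders follow the ordinary links, while JS-capable browsers enhance them into AJAX updates.
// Ordinary GET links such as <a class="page-link" href="/products?page=3">3</a> stay crawlable,
// and are progressively enhanced into AJAX updates when JavaScript is available.
$('a.page-link').bind('click', function () {
    var url = $(this).attr('href');
    // Fetch the target page and splice only the grid's contents into the current page.
    $('#grid').load(url + ' #grid > *');
    return false; // suppress normal navigation; back-button/URL handling would still need attention
});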

How to keep an AJAX Web Forms solution DRY

At the moment we have a solution which is Web Forms ASP.NET 4.0. We do a number of things such as using web methods and services, either calling them the standard Web Forms way or sometimes, to reduce the footprint, calling them directly with jQuery AJAX posts and gets.
We are looking to improve the way we work, but we have heavy constraints on how the solution currently stands and are not able to completely rewrite it.
Updating the page using AJAX for data and forms, for example pulling "the next 20" items and displaying them on the page, is what I would like to heavily streamline.
Using templating engines such as PURE and jQuery Templates is a fantastic way to produce fast calls back and forth to the server, but it results in having two copies of the HTML (the template for jQuery and the markup in the initial render of the page).
We have thought about possibly producing an empty template and then always populating it via JSON data from the server, but I feel this isn't how things should be done...
Can anyone recommend the best way to do this without having two copies of our 'template' (e.g. a row of a table)?
You mean you have a template in ASP and the same template in JavaScript, but you'd rather just have one or the other?
I think that is really subjective; it is always different based on the use case. That being said, I'd do it by modifying my views and templates. My views (non-JS) would simply have containers for that dynamic content. In other words, I'd never load the dynamic portions of content into the views initially. Rather, on page load I would simply load up the template and the JSON that fills it in.
If you think about it, that's two more requests, but it makes your life easier. The user is also able to see something on the page sooner.
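As a minimal sketch of that "empty container plus template" approach, using the jQuery Templates plugin with a hypothetical template, container and URL; the same template then serves both the initial load and later updates:
// In the view: an empty #comments container plus the template itself, e.g.
// <script id="comment-tmpl" type="text/x-jquery-tmpl"><div class="comment">${Text}</div></script>
$(function () {
    $.getJSON('/comments/list', function (comments) {
        // Fill the container on page load with the same template used for later AJAX updates.
        $('#comment-tmpl').tmpl(comments).appendTo('#comments');
    });
});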
This is one of those questions that really depends on what you are doing. There are trade-offs to be analyzed with every solution.

creating an ajax application

I have several pages of my web application done. They all use the same master page, so they all look very similar, except of course for the content. It's technically possible to have all the pages inside one big UpdatePanel so that, instead of jumping from page to page, the user always stays on the same page and the links trigger __doPostBack callbacks to update it with the appropriate panel.
What could be the problem(s) with building my site like this?
Well, "pages" provide what is known as the "Service Interface layer" between your business layer and the HTTP aspect of the web application. That is, all of the HTTP, session and related aspects are "converted" into regular C# types (string, int, custom types etc.), and the page then calls methods in the business layer using regular C# calling conventions.
So if you have only one update panel in your whole application, what you're effectively saying is that one page (the code-behind portion) will have to handle all of the translation between the HTTP-ness and the business layer. That will just be a mess from a maintainability perspective and a debugging perspective.
If you're in a team, each of you will potentially be modifying the same code-behind. This could be a problem for some source control systems, and one or more of you could define the same method name with the same signature but different implementations. That won't be easy to merge.
From a design perspective, there is no separation of concerns. If you have a menu or hyperlink in a business application, it most likely represents a different concern. Not a good design at all.
From a performance perspective, you'll be loading all of your system's functionality no matter what function your user is actually performing.
You could still give the user a one-page experience and redirect the callbacks to handlers for the specific areas of concern. But I'd think real hard about the UI and the actual user experience you'll be providing; it's possible that you'll have a clutter of menus and other functionality when you combine everything into one page.
Unless the system you are building is really simple, has no potential to grow beyond what it currently is, and a one-page experience would truly provide value and an improved user experience, I wouldn't go down this route.
When you have a hammer, everything looks like a nail.
It really depends on what you are trying to do. Certainly, if each page is very resource-intensive, you may have faster load times if you split them up. I'm all for simplicity, though, and if you have a clean and fast way of keeping users on one page and using AJAX to process data, you should definitely consider it.
It would be hard to list all the downsides of an AJAX solution, though, without more details about the size and scope of the web application you are building.

Rendering javascript at the server side level. A good or bad idea?

I want to make it clear first: this isn't a question about server-side JavaScript or running JavaScript server side. This is a question about rendering JavaScript code (which will be executed on the client side) from server-side code.
Having said that, take a look at the ASP.NET code below, for example:
hlRemoveCategory.Attributes.Add("onclick", "return confirm('Are you sure you want to delete this?');")
This is attaching the client-side onclick event handler on the server side.
As opposed to writing the JavaScript on the client side:
$('a[rel=remove]').bind('click', function(event) {
    return confirm('Are you sure you want to delete this?');
});
Now the question I want to ask is: what is the benefit of rendering JavaScript from the server-side code, or vice versa?
I personally prefer the second way of hooking up client-side UI/behaviour to HTML elements for the following reasons:
The server side already does whatever it needs to, including data validation, event delegation, etc.; and
What the server side sees as an event is not necessarily the same process on the client side, i.e. there are plenty more events on the client side (just look at custom events); and
What happens on the client side and on the server side during an event could be completely unrelated and decoupled; and
Whatever happens on the client side happens on the client side; there is no need for the server to know. The server should process and run what is given to it; how the process comes to life is not really up to it to decide in the case of client-side events; and so on and so forth.
These are my thoughts, obviously. I want to know what others think and whether there have been any discussions on this topic.
Topics branching from this argument can reach:
Code management: is it easier to render everything from the server side?
Separation of concerns: is it easier if client-side logic is separated from server-side logic?
Efficiency: which is more efficient both in terms of coding and running?
At the end of the day, I am trying to move my team towards the second approach. There are a lot of old guys on this team who are afraid of this change. I just wish to convince them with the right facts and stats.
Let me know your thoughts.
UPDATE 1: It looks like all of us who have participated in this post share the same thinking; good to know that there are others who think alike. Now to go convince the guys ;) Thanks everyone.
Your second example is vastly superior to the first. JavaScript is your behaviour layer and should be kept separate from your semantic markup (content) and CSS (presentation). There are a number of reasons this is better architecture:
It encourages progressive enhancement. As you mentioned, the backend code should work correctly in the absence of JS; you cannot rely on your clients having JS available. This way you build it once without JS and can then enhance the experience for those with JS (e.g. by adding client-side validation on top of server-side validation so that the user gets instant feedback).
Cleaner markup and normally a reduced download size: one reusable selector in a separate JS file that can be cached and shared between pages, vs. a handler on each element.
All of your JS lives in one reused place. E.g. if your code opened a popup window and you decided to change the dimensions of the window, you would change it once in the JS file vs. having to change every individual inline handler (see the sketch after this list).
There are lots of other arguments and reasons but they should get you started...
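As an illustration of that last point, a minimal sketch assuming a hypothetical popup CSS class; the dimensions live in exactly one cacheable file:
// One handler in a shared .js file covers every popup link on every page.
$('a.popup').bind('click', function () {
    window.open(this.href, 'popup', 'width=400,height=300'); // change the dimensions once, here
    return false;
});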
Also, from your example it appears that you have a normal link in your document which can delete content. That would also be bad practice: anything that deletes or updates content should be done via a POST (not GET) request, so it should be the result of submitting a form. Otherwise, Googlebot could accidentally delete all of your content just by crawling your page (and search engine robots don't execute JS, so your confirm wouldn't help there).
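A minimal sketch of the POST-based version, assuming a hypothetical form class and action URL; the destructive action is a form submit, with the confirm bound unobtrusively:
// In the markup: <form method="post" action="/categories/delete" class="delete-category"> ... </form>
$('form.delete-category').bind('submit', function () {
    return confirm('Are you sure you want to delete this?');
});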
The biggest differences I can think of up front are:
you lose the client-side caching you would get if the JavaScript were in a separate .js file
if you need to change your JavaScript, you have to recompile (extrapolate this to what happens after you have released your product: if you have to recompile, then you need to redistribute binaries instead of just a modified .js file)
it is easier to use the VS debugger if the JavaScript is in a separate file; you can just set a breakpoint in that file. If you are generating the code server-side, you have to use the Running Documents feature, find your generated code and then add the breakpoint, and that breakpoint has to be added manually every time you re-run your app. Following on from that, if the code is in a separate file, you can just tweak the JavaScript, F5 your browser page, and keep on debugging without having to stop and restart the debugger.
It should be mentioned that sometimes you do have to emit JS code from the server, for example when the bulk of your code is in a separate .js file and you need to insert control identities into the page for that code to work with. Just try to avoid that situation where possible.
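One way to keep that to a minimum, sketched here with a hypothetical pageConfig object: the page emits only the server-generated identities, and the shared external .js file does the rest.
// Rendered by the page (server side):
// <script>var pageConfig = { removeLinkId: '<%= hlRemoveCategory.ClientID %>' };</script>
// In the shared external .js file:
$(function () {
    $('#' + pageConfig.removeLinkId).bind('click', function () {
        return confirm('Are you sure you want to delete this?');
    });
});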
Looks like you already know what to do. Rendering it on the server side is a bad idea.
The simple reasoning is that your JavaScript then lives both in the server-side pages and in separate JavaScript files (assuming you are using JavaScript at all). It can become a debugging nightmare to fix things when everything is everywhere.
If you were not using any other JavaScript besides what the server-side scripts generate, it would probably be fine and manageable (forget what the unobtrusive movement says).
Secondly, if you have 100 links on the page, you will be repeating that same code in 100 places. Repetition is another maintenance and debugging nightmare. You can handle all links on all pages with one event handler and one attribute. That doesn't even need a second thought.
<Rant>
It's not easy to separate HTML and JavaScript, and even CSS, especially if you want some AJAX or UI goodness. To have total separation, we would have to move to a desktop-application model where all the front-end code is generated on the client side programmatically using JavaScript, and all interaction with the server is limited to pure data exchange.
Most upstream communication (client to server) is already just data exchange, but not the downstream communication. Many server-side scripts generate HTML, merge it with data and spit it back. That is fine as long as the server stays in command of generating the HTML views. But when fancy JavaScript comes on board and starts appending rows to tables, and divs for comments, by replicating the existing HTML structure exactly, then we have created two points at which the markup gets generated.
$(".comments").append($("<div>", {
"id": "123",
"class": "comment",
"html": "I would argue this is still bad practice..."
}));
Maybe this is not as big a nightmare (depending on the scale), but it can be a serious problem too. Now if we change the structure of the comments, the change needs to be made in two places: the server-side scripts and templates where content is initially generated, and the JavaScript side which dynamically adds comments after page load.
A second example is applications that use drag and drop. If you can drag divs around the page, they need to be taken out of the regular page flow and positioned absolutely or relatively with precise coordinates. Now since we cannot create classes beforehand for all possible coordinates (and it would be stupid to attempt that), we basically inject styles directly into the element. Our HTML then looks like:
<div style="position: absolute; top: 100px; left: 250px;">..</div>
We have screwed up our beautiful semantic pages, but it had to be done.
</Rant>
Semantic and behavioural separation aside, I would say it basically boils down to repetition. Are you repeating code unnecessarily? Are multiple layers handling the same logic? Is it possible to shove all of it into a single layer, or to cut down on all the repetition?
You and the other people answering the question have already listed reasons why it is better not to have the server-side code spit intrinsic event attributes into documents.
The flip side of the coin is that doing so is quick and simple (at least in the short term).
IMO, this doesn't come close to outweighing the cons of the approach, but it is a reason.
For the code in your example it doesn't really matter. The code isn't using any information that is only available at the server side, so it's just as easy to bind the event in client side code.
Sometimes you want to use some information that is available at the server side to decide whether the event should be added or not, or to create the code for the event, for example:
if (categoryCanBeDeleted) {
hlRemoveCategory.Attributes.Add(
"onclick",
"return confirm('Are you sure you want to delete the " + categoryType + "?');"
);
}
If you did this on the client side, you would have to put that information into the page somehow so that the client-side code also has access to it.
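A minimal sketch of one way to do that, assuming the server renders hypothetical data-* attributes onto the link; the shared script then reads them at bind time:
// Server renders e.g. <a rel="remove" data-can-delete="true" data-category-type="colour">...</a>
$('a[rel=remove]').bind('click', function () {
    if ($(this).attr('data-can-delete') !== 'true') {
        return false; // deletion not allowed for this category
    }
    return confirm('Are you sure you want to delete the ' + $(this).attr('data-category-type') + '?');
});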

Web form design - multiple pages vs single page with AJAX

We are designing a data capture process with many questions (using ASP.NET), which can repeat (e.g. enter all your vehicles). In rare cases this could be over 100 repeating groups.
Therefore I've proposed that, rather than having one huge form, we split the application into multiple forms, using logical points for page splits (e.g. personal details) in a wizard style.
However, there is debate within the team as to whether we should be using AJAX/Javascript to have a single form. Suggested approaches to this appear to be:
Load in all the steps in one go, and just use Javascript to toggle.
Load in step 1 and load the other steps using AJAX.
Submit the form using AJAX and load the next step using AJAX.
Option 1 to me defeats the entire point, as you'd end up loading a massive HTML tree; some of the pages can be large. This would be more acceptable if there were a small number of steps with very few questions on each.
Option 2 seems overly complex to me. Plus, if the user clicks the next button, can you guarantee the next page has loaded? Also, what happens if input from page 1 is required for page 2?
Option 3 seems doable, but I'd have thought the response time of the AJAX processing would actually be slower than a standard form submit, as there would be more processing involved in the client browser. This approach is also more complex than a standard form submit.
Do you think my approach is correct (standard form posts)? I've read that typically AJAX is used to enhance the functionality of a page, rather than trying to emulate multiple pages.
The Single Page Interface Manifesto talks about how you can build SPI web sites without losing bookmarking, Back/Forward buttons (history navigation) or SEO.
With regard to performance, this is something you can test. But if you go the AJAX route, you should provide an alternative for users without JavaScript. The extra work required may be enough to push the idea off the table.
To my mind, options 2 and 3 lie at the ends of a spectrum. You can use AJAX to submit whatever information is required for the next step: option 2 submits nothing; submit the entire form and you have option 3. You'll probably want something in between.
As for performance, there is some processing to be done (collect form data, data transmission, parsing, splicing the result into the document), but the browser will be doing the same tasks with a traditional form; AJAX just makes parts of the process more visible. Since the data is smaller in size (part of a form rather than a full page), performance shouldn't be much worse and may in fact be better. Also, modern javascript engines are very fast.
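A minimal sketch of that in-between approach, with hypothetical IDs and URL: only the current step's fields go over the wire, and the server responds with the next step's markup.
$('#next-step').bind('click', function () {
    var data = $('#step-1 :input').serialize();      // just this step's inputs, not the whole wizard
    $.post('/wizard/save-step', data, function (html) {
        $('#wizard').html(html);                     // server validates, stores, and returns the next step
    });
    return false;
});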
In any case, listen to MikeB's advice about degrading gracefully when JS isn't supported, unless this is for an internal app where you can depend on browser make & configuration.
