I'm looking for some input on a discussion we're having. We're weighing two solutions to a customization problem. The first uses nested master pages: one master page per customized page, nested inside a standard master page shared by all pages. The second uses a standard page, which redirects to the custom page if one exists.
My question is, which is more desirable? Having to load two master pages on every request, or only having to load one master page and sometimes redirecting (Response.Redirect or Server.Transfer) to the customized page?
I can't really find any information on master page performance. Should I just think of them as another (somewhat inverted) user control, or should they be used sparingly?
Edit:
You can assume Response.Redirect for the transfer.
You can assume the redirect occurs in the PreInit stage of the page lifecycle.
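For illustration, here is a minimal sketch of the redirect approach in a page's code-behind; the ~/Custom/ path convention and class name are assumptions for the sketch, not part of the original design:

```csharp
using System;
using System.IO;
using System.Web.UI;

public partial class StandardPage : Page
{
    protected void Page_PreInit(object sender, EventArgs e)
    {
        // Hypothetical convention: customized variants live under ~/Custom/.
        string customPath = "~/Custom/" + Path.GetFileName(Request.Path);

        if (File.Exists(Server.MapPath(customPath)))
        {
            // Response.Redirect costs an extra round trip to the browser;
            // Server.Transfer would hand off on the server without one.
            Response.Redirect(customPath);
        }
    }
}
```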
Master pages are, in a sense, inverted user controls (they "surround" content, as opposed to user control content, which is pushed into the page). There isn't a rule of thumb that says to use master pages lightly; a master page's performance cost is directly proportional to how much code (HTML and otherwise) it pushes into the HTTP data stream.
For my money, I'd land on the side of whatever makes for the most readable and maintainable code.
So it's a Web Forms application either way, right? Have you considered it from a Model-View-Controller perspective?
Regarding nested master pages: I did that on a client CRM application, and I can assure you that you won't notice any loading problems beyond the usual ones caused by internet and server speed. If you think it's the best way to accomplish your objectives, go for it.
Each master page fires its own events, and the natural worry is that two master pages plus one or several content pages could hurt load time, but performing all of this takes milliseconds, so you're safe.
The only thing you need to watch out for is not piling on every JavaScript add-on you can find. Choose one good library and use only that; jQuery (now at version 1.3.0) would be the best choice.
At least, that's my opinion :)
We're building a Kentico 8.2 site using the ASPX+portal model. Looking at the rendered HTML on my live site, I can see a lot of unnecessary JavaScript that Kentico has dumped into my page. What's more, it appears at the top of the page, right at the top of the form element.
For example, it's rendering the ASP.NET __doPostBack JS function even though I'm not using any controls that require it. Other scripts are being added as WebResource.axd and ScriptResource.axd includes.
At a glance it would seem these scripts constitute the Microsoft AJAX framework used with UpdatePanel etc. My assumption is that they are there to add portal manager functionality when using the page in the Kentico UI. Presumably they are also used with certain built-in web parts.
However, I am only using custom web parts on my live site so all these scripts are doing nothing and are just slowing down my page and causing poor performance testing results.
I've tried hiding the <ajaxToolkit:ToolkitScriptManager /> and <cms:CMSPortalManager /> controls on my master page when rendering the live site, but this causes templates that have a <cms:CMSWebPartZone /> to break.
Does anyone know how to ensure this bloat is removed when not required? Or at the very least cause these scripts to be rendered at the end of the page so they don't interfere with performance too much?
Unfortunately, building sites within Kentico using ASPX and ASPX+Portal Pages will automatically generate additional markup such as __doPostBack, WebResource.axd and ScriptResource.axd.
I wouldn't recommend removing any of the default code in your master page. This will cause things to break (as you've experienced).
However, having this markup in place shouldn't cause a massive issue in page performance. Understandably, this isn't ideal.
What I do to lessen the hit is the following:
Disable the ViewState wherever possible, for example at the page template or web part/user control level (see the sketch after this list).
Move the ViewState to the bottom of the page (in Kentico Settings), so the page is less "top heavy".
Ensure you are caching everything you can: for example, site furniture used by your web parts and templates (images/JS etc.) at IIS level and at Kentico level using their API.
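As a rough sketch of the first point: in plain ASP.NET the ViewState can be switched off per page or per control (Kentico exposes the equivalent settings at template and web part level, as noted above). The control and page names here are illustrative:

```aspx
<%-- Page-level: no control on this page keeps ViewState. --%>
<%@ Page Language="C#" EnableViewState="false" %>

<%-- Control-level: turn it off for anything that doesn't need postback state. --%>
<asp:Label ID="Heading" runat="server" EnableViewState="false" Text="News" />
```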
This article from the Kentico documentation covers the topic in greater depth: Optimizing website performance
If you really want "full control" over the rendered HTML, Kentico does allow you to create templates using MVC. But this won't give you the flexibility to modify page templates by moving web parts around within the CMS administration. I presume you chose the portal page approach for that very reason.
I hope this helps.
In addition to #sbhomra's great answer I have a few questions, suggestions and comments.
How many seconds or milliseconds are we talking about? If you'd only gain a few milliseconds back, it's not worth the effort of rebuilding all that functionality. If you're talking about a second or two, there are about 15 different things you can change within settings and your code to gain it all back. Think about how much code you'd have to write, maintain and upgrade just to gain back a second or less.
WebResource.axd and ScriptResource.axd serve resources that are compiled into libraries used by the website. So if someone created an external library that loads an image compiled into it, you'd get that WebResource.axd reference on your site. You'd have to physically remove those libraries from the Kentico instance.
Although I don't recommend it, strictly because you lose so much functionality and end up with so much extra unnecessary code, MVC will give you the control you're looking for.
Using ASP.NET MVC: I am caught in the middle between rendering my views on the server vs. in the client (say, jQuery templates). I do not like the idea of mixing the two, as I hear some people suggest doing. For example, some have said they render the initial page (say, a list of comments) server-side, and then use client-side templating when a new comment is added. Having to keep the same rendering logic in two different areas of your code makes me wonder how people convince themselves it is worth it.
What are the reasons you use to decide which to use where?
How does your argument change when using ASP.NET Web Forms?
One reason that people do that is because they want their sites to get indexed by search engines but also want to have the best user experience, so are writing client code for that. You have to decide what makes sense given the constraints and goals you have. Unfortunately, what makes the most business sense won't always seem to make the most sense from a technical perspective.
One advantage of server-side rendering is that your clients don't need JavaScript for your pages to be functional. If you're relying on jQuery templates, you pretty much have to assume that your page won't have any content when rendered without JavaScript. For some people this is important.
As you say, I would prefer not to use the same rendering logic twice, since you run the risk of letting it get out of sync.
I generally prefer to just leverage partial views to generate most content server-side. Pages with straight HTML tend to render a bit faster than pages that have to be "built" after they've loaded, making the initial load a little speedier.
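For example (the controller, action and view names below are invented, not from the question), a comments list can be rendered by one partial view that serves both the initial page render and later AJAX refreshes, so the rendering logic lives in exactly one place:

```csharp
using System.Collections.Generic;
using System.Web.Mvc;

public class CommentsController : Controller
{
    // Returns just the rendered comments fragment; the same "_List" partial
    // can also be used inside the full page view, keeping one template.
    public ActionResult List(int postId)
    {
        IList<string> comments = LoadComments(postId);
        return PartialView("_List", comments);
    }

    private static IList<string> LoadComments(int postId)
    {
        // Stubbed for the sketch; in practice this hits your data layer.
        return new List<string> { "First!", "Great post." };
    }
}
```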
We've developed an event-based AJAX architecture for our application which allows us to generate a piece of content in response to the result of an action, and essentially send back any number of commands to the client-side code to say "Use the results of this rendered partial view to replace the element with ID 'X'", or "Open a new modal popup dialog with this as the content." This is beneficial because the server-side code can have a lot more control over the results of an AJAX request, without having to write client-side code to handle every contingency for every action.
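To give a flavour of the idea, here is a rough sketch of what such a command-based response might look like; this is not the poster's actual implementation, and every name in it is invented for illustration:

```csharp
using System.Collections.Generic;
using System.Web.Mvc;

// Invented command shape: tells a client-side dispatcher what to do with
// a piece of server-rendered markup.
public class AjaxCommand
{
    public string Action { get; set; }   // e.g. "replaceElement", "openDialog"
    public string TargetId { get; set; } // DOM element to act on, if any
    public string Html { get; set; }     // server-rendered content
}

public class CommentsAjaxController : Controller
{
    [HttpPost]
    public JsonResult AddComment(int postId, string text)
    {
        // In a real app the Html would come from rendering a partial view to
        // a string; it is hard-coded here to keep the sketch self-contained.
        var commands = new List<AjaxCommand>
        {
            new AjaxCommand
            {
                Action = "replaceElement",
                TargetId = "comments-" + postId,
                Html = "<li>" + Server.HtmlEncode(text) + "</li>"
            }
        };
        return Json(commands);
    }
}
```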
On the other hand, putting the server in control like this means that the request has to return before the client-side knows what to do. If your view rendering was largely client-based, you could make something "happen" in the UI (like inserting the new comment where it goes) immediately, thereby improving the "perceived speed" of your site. Also, the internet connection is generally the real speed bottleneck of most websites, so just having less data (JSON) to send over the wire can often make things more speedy. So for elements that I want to respond very smoothly to user interaction, I often use client-side rendering.
In the past, search-engine optimization was a big issue here as well, as Jarrett Widman says. But my understanding is that most modern search engines are smart enough to evaluate the initial JavaScript for pages they visit and figure out what the page will actually look like after it loads. Google even recommends the use of the "hashbang" (#!) in your URLs to help it index pages that are dynamically loaded by AJAX.
I have several pages of my web application done. They all use the same master page, so they all look very similar except, of course, for the content. It's technically possible to use one larger UpdatePanel and have all the pages inside it, so that instead of jumping from page to page, the user always stays on the same page and the links trigger __doPostBack callbacks to update the appropriate panel.
What could be the problem(s) with building my site like this?
Well, "pages" provide what is known as the "Service Interface layer" between your business layer and the http aspect of the web application. That is all of the http, session and related aspects are "converted" into regular C# types (string, int, custom types etc.) and the page then calls methods in the business layer using regular C# calling conventions.
So if you have only one UpdatePanel in your whole application, what you're effectively saying is that one page (its code-behind portion) will have to handle all of the translation between the HTTP-ness and the business layer. That will just be a mess from both a maintainability perspective and a debugging perspective.
If you're on a team, each of you will potentially be modifying the same code-behind. This could be a problem for some source control systems, and one or more of you could define the same method name with the same signature but different implementations. That won't be easy to merge.
From a design perspective, there is no separation of concerns. If you have a menu or hyperlink in a business application, it most likely represents a different concern. Not a good design at all.
From a performance perspective, you'll be loading all of your system's functionality no matter what function your user is actually performing.
You could still give users a one-page experience and redirect the callbacks to handlers for the specific areas of concern. But I'd think real hard about the UI and the actual user experience you'll be providing. It's possible you'll end up with a clutter of menus and other functionality when you combine everything into one page.
Unless the system you are building is really simple, has no potential to grow beyond what it currently is, and a one-page experience would truly provide value and improve the user experience, I wouldn't go down this route.
When you have a hammer, everything looks like a nail.
It really depends on what you are trying to do. Certainly, if each page is very resource-intensive, you may have faster load times if you split them up. I'm all for simplicity, though, and if you have a clean and fast way of keeping users on one page and using AJAX to process data, you should definitely consider it.
It would be impossible to list all the potential downsides of an AJAX solution, though, without more details about the size and scope of the web application you are building.
As I mentioned in an earlier question, I am having trouble with the performance of a web site... some SQL queries are killing the server. But, as the title of this post says, I've been looking at the OutputCache page directive to improve the performance of the site.
However, I have some questions regarding this directive:
1- If I have a web user control that declares an OutputCache directive inside a page that has one too, which one will "win"?
2- What's the best practice regarding the duration? I'd love to have a sliding window too.
Thanks for your help and please visit http://www.developerit.com
On a request where neither is cached, both the page and the control will be created and then added to the output cache. If the page is cached, the control will not be created at all, regardless of whether it's in the cache or not: its markup is contained in the cached copy of the page. If the page is not cached and the control is, the control's cached markup will be used in the page.
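To make the interaction concrete, here is a sketch of the two directives; the durations are arbitrary:

```aspx
<%-- In the page (.aspx): caches the whole page, including the control's
     output, for 60 seconds. --%>
<%@ OutputCache Duration="60" VaryByParam="None" %>

<%-- In the user control (.ascx): only comes into play on requests where
     the page itself is not served from the cache. --%>
<%@ OutputCache Duration="300" VaryByParam="None" %>
```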
Here's a good article on Output caching: https://web.archive.org/web/20211020111708/https://www.4guysfromrolla.com/articles/121306-1.aspx.
Generally, you are looking at page and fragment caching. You want to cache the page if you can, as that will give you the biggest performance benefit. But if you have regions on the page that must change dynamically per user (e.g. you are saying 'Hi {username}' at the top of the page), then you need to look at fragment caching.
Fragment caching is not as effective as page caching, since the output has to be stitched together from cached info and dynamic info, but it's usually still MUCH better than NO caching!
It's a bit of an art to tweak the caching depending on what the page does and the load on the database, but it can make a page load many orders of magnitude faster than a non-cached one.
FYI, if the DB queries are killing the site, you may also want to look at taming them and/or caching their output individually, so that you don't have to keep hitting the database for the same information.
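As a sketch of caching query output individually (the key, duration and types are illustrative), the ASP.NET cache API supports both absolute and sliding expirations, which also speaks to the sliding-window part of the question:

```csharp
using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public static class ProductCache
{
    // Illustrative helper: cache an expensive query's output for 5 minutes.
    public static IList<string> GetProductNames()
    {
        const string key = "ProductNames"; // illustrative cache key
        var names = HttpRuntime.Cache[key] as IList<string>;
        if (names == null)
        {
            names = LoadProductNamesFromDatabase(); // the expensive SQL call
            HttpRuntime.Cache.Insert(
                key, names, null,
                DateTime.UtcNow.AddMinutes(5),      // absolute expiration
                Cache.NoSlidingExpiration);
            // For a sliding window instead, pass Cache.NoAbsoluteExpiration
            // and TimeSpan.FromMinutes(5).
        }
        return names;
    }

    private static IList<string> LoadProductNamesFromDatabase()
    {
        // Stubbed for the sketch; in practice this runs the SQL query.
        return new List<string> { "Widget", "Gadget" };
    }
}
```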
Also, the 'VaryByParam' attribute is pretty useful: say you have a page in three languages; you can cache one copy per language using VaryByParam, as long as your URL has some sort of language component that VaryByParam can pick up.
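A sketch of the language case, assuming the language arrives as a query-string parameter named lang:

```aspx
<%-- Keeps a separate cached copy per value of ?lang=, e.g.
     /news.aspx?lang=en and /news.aspx?lang=fr are cached independently. --%>
<%@ OutputCache Duration="600" VaryByParam="lang" %>
```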
HTH,
Lance
While working on a quite big web application project, I decided it could gain a bit of fresh air by marking some of the pages and controls with CompilationMode="Never" in the @ Page directive. So far so good, it worked as expected, and then it happened: a corner-case scenario that, to put it nicely, behaved unexpectedly. That scenario is nested master pages.
A quick teaser before continuing: how deep do you think the nesting can go if you mark the top master page with CompilationMode="Always" and all others beneath it with CompilationMode="Never"? No, it's not infinite, or some internal limit of ASP.NET. It's 2. Why? I have no idea, and I was hoping some of you smart folks could enlighten me.
I have attached a project with 5 nested master pages to demonstrate what I am talking about: Nested Master Pages Web Application Test Project.
Another corner case that works unexpectedly: if you have 5 nested master pages, change the second to CompilationMode="Always" and all others to CompilationMode="Never". You will notice that the 3rd master page is applied twice!
Please help me understand if something I am doing is incorrect, or confirm the issue.
ASP.NET Runtime Version: 2.0, .NET: 3.5
EDIT:
The project attached has all master pages set to CompilationMode="Never". The ASPX page displays as desired. Change the first master (Site.master) to have CompilationMode="Always" to see what I am talking about.
UPDATE (1/21/2010): Good news: after more investigation, it turns out that this issue was fixed in VS2010. The fix was made post Beta 2, so it will be part of the next public build. I don't have an exact date, but it should not be too far out.
Yes, I seem to recall this coming up before, and indeed some scenarios involving nested master pages and CompilationMode="Never" are broken.
Looking at an old mail thread, I think it only happens for certain combinations. It looks like it's broken for (where NoCompile means CompilationMode="Never"):
NoCompile Page / Compile Master / NoCompile Master
NoCompile Page / NoCompile Master / Compiled Master
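Expressed as directives (the file names are invented for illustration), the two broken combinations look like this:

```aspx
<%-- Combination 1: NoCompile page / compiled master / NoCompile master --%>

<%-- Page.aspx --%>
<%@ Page Language="C#" CompilationMode="Never" MasterPageFile="~/Inner.master" %>

<%-- Inner.master --%>
<%@ Master Language="C#" CompilationMode="Always" MasterPageFile="~/Outer.master" %>

<%-- Outer.master --%>
<%@ Master Language="C#" CompilationMode="Never" %>

<%-- Combination 2: NoCompile page / NoCompile master / compiled master --%>

<%-- Page.aspx --%>
<%@ Page Language="C#" CompilationMode="Never" MasterPageFile="~/Inner.master" %>

<%-- Inner.master --%>
<%@ Master Language="C#" CompilationMode="Never" MasterPageFile="~/Outer.master" %>

<%-- Outer.master --%>
<%@ Master Language="C#" CompilationMode="Always" %>
```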
At the time, we did not fix this because the fix was non-trivial and the scenario is not common.
Note that when it comes to NoCompile pages, most of the benefit comes from using them for end-node .aspx pages, not master pages. Generally, NoCompile pages run a bit slower than compiled pages. Their benefit is that they don't take a first-time compilation hit and they use less memory; they can also be fully unloaded under memory pressure. That's why they make good sense when you have a very large number of endpoint pages (SharePoint uses them). But for master pages (where most apps only have a small number shared by many pages), that benefit would be minimal. And of course you can't have code in NoCompile pages, which is the main reason few people use them.
So the quick summary is: you're right, it's a bug! :) And the recommended workaround is to avoid CompilationMode=never for master pages.