Suppose I have an ASP.NET page that goes to the server after a client-side event, does "some stuff", and afterwards shows the return value of that process in the UI.
My question is: if I am working within the same domain, how should I decide between creating a web service and calling that, versus simply raising a postback and handling this "some stuff" on the aspx page itself?
Under what conditions does creating a web service become worthwhile when working in the same domain?
There are no hard-and-fast rules. However, I can offer a few high-level guidelines:
Prefer an .aspx page if the result includes a significant amount of markup (HTML, JS, etc), or where generating the results is simplified by having access to control state from the original page. Keep in mind that the Page object carries a significant amount of overhead with it.
Prefer a web service for queries that can be parameterized and that return structured data
Prefer an HttpHandler for queries with simple parameters that return either simple, full-custom text or binary (such as an image)
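To illustrate the last option, here is a minimal sketch of a generic handler (the file and parameter names are illustrative) that returns simple custom text with none of the Page overhead:

```csharp
// TextHandler.ashx.cs -- minimal generic handler sketch (names are illustrative).
// No Page lifecycle, no ViewState: just the request in and the response out.
using System.Web;

public class TextHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Read a simple query-string parameter and return plain text.
        string id = context.Request.QueryString["id"];
        context.Response.ContentType = "text/plain";
        context.Response.Write("Result for " + id);
    }

    // The handler keeps no per-request state, so one instance can be reused.
    public bool IsReusable
    {
        get { return true; }
    }
}
```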
I would look at how long the post-and-reload action takes. It depends on the user's expectations. Most people, if they know they are using a browser, will find up to two seconds just about acceptable for the action to occur and the screen to reload. On the other hand, in one of my jobs, I was using ASP.NET to drive a touchscreen, and this just looked totally wrong, so I refactored the code to use a static web page plus a WebService component.
You also need to take into account the capabilities of the browser. In the example above, I knew that I was only targeting IE6, so I could write my JavaScript to take advantage of that browser. You may not be so lucky. If you use a web service that updates the client, you should ensure that you target a version of JavaScript and the DOM that is supported by all your target browsers.
Folks,
I have some personalized properties on an ASP.Net Web Part that I would like to set via Ajax (for example, the size to which the user has expanded the WebPart using the jQuery Resizable plugin.)
I've read this question and answer, but I don't think it will work. Personalized properties are instance properties. (If they were static, they couldn't be user-scoped, could they?) A WebMethod, which must be static, can't access them.
You might be thinking that I could just use a hidden field and send my values back that way, but that assumes that a postback is done at least once after the values are set. In this case I can't guarantee that: the page in question displays a lot of data but doesn't take any user input other than for configuration.
I seem to recall that some Ajax techniques involve remotely instantiating a page on the server and going through at least part of the page life cycle, but the last time I messed with that was 2006 and I never could get it to work very well. I have the impression that modern Ajax techniques, even for ASP.Net, work in other ways.
So, does anybody have an idea of how this could be managed?
Thanks very much,
Ann L.
Webmethods only have to be static when they are page-based. Create a webservice in your project and stick non-static webmethods in there. You can even enable session state.
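A minimal sketch, assuming an .asmx service added to the project (the service and method names are illustrative; wiring the values into the personalization store is left out):

```csharp
// PersonalizationService.asmx.cs -- a script-callable service with a
// non-static web method and session state enabled (names are illustrative).
using System.Web.Services;
using System.Web.Script.Services;

[WebService(Namespace = "http://example.com/")]
[ScriptService] // lets client script call the service and get JSON back
public class PersonalizationService : WebService
{
    [WebMethod(EnableSession = true)] // instance method; Session is available
    public void SaveSize(int width, int height)
    {
        // Illustrative only: a real implementation would push these values
        // into the user-scoped personalization data for the web part.
        Session["WebPartSize"] = width + "x" + height;
    }
}
```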
I have an application that makes a series of WCF calls that return JSON via JSONP. JavaScript code then binds the returned data to HTML controls.
When a bot/spider hits my application, no data gets indexed, because the JavaScript never executes in the bot.
What are some good patterns for dealing with this problem? Ideally I'd like to not have to maintain two sets of data-binding code (one on the server side and one on the client side).
Essentially I need the resulting data to come downstream. Some ideas I had were:
1) link RSS/ATOM equivalent data
2) a backdoor HTML page
3) an HTML renderer that can execute an ASPX page server side ahead of time and then pass that off to the client
Any guidance would be helpful
Option 3 can, I think, solve the problem. I suggest trying this:
Check whether JavaScript is enabled and whether a bot is browsing the page
If it is a bot or JS is disabled, load the page without the web service call and render it with server-side code
Otherwise, go for the JS version.
I would recommend this if your data volume is relatively low and the implementation cost is not too high.
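A rough sketch of that branch in the page's code-behind (Request.Browser.Crawler relies on the browser-capability definitions and is not exhaustive; ProductRepeater and ProductRepository are hypothetical names):

```csharp
// Render the data server-side for bots; otherwise leave the empty
// containers for the JSONP calls to fill in. A rough sketch.
protected void Page_Load(object sender, EventArgs e)
{
    bool isBot = Request.Browser.Crawler ||
        (Request.UserAgent != null &&
         Request.UserAgent.IndexOf("bot", StringComparison.OrdinalIgnoreCase) >= 0);

    if (isBot)
    {
        // Hypothetical: bind the same data directly to a server-side
        // repeater so the markup is crawlable without JavaScript.
        ProductRepeater.DataSource = ProductRepository.GetProducts();
        ProductRepeater.DataBind();
    }
}
```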
Based on my understanding, both of them essentially do the same thing (they let us execute a server-side method from JS). Are there any differences?
Also, Ajax Page Methods can be implemented either using jQuery or using ScriptManager. Which one is preferred, and why?
Fundamentally, Client Callbacks and Ajax Page Methods are doing the same thing. They use an XMLHttpRequest object to send a request (usually asynchronous) to some URL, get the results of that request, then execute a callback method you've provided (callback with a lowercase c), passing the results of the request to your method.
Having said that, there is one big difference between the two approaches:
Page Methods are implemented as static methods on your page. Your page class is just a convenient container for these methods, which could really be hosted anywhere (a web service, a custom HttpHandler, etc.). Since no instance is ever going to be constructed, the client doesn't have to send ViewState data and Asp.Net doesn't have to run through the Page's lifecycle. The flip side is that you don't have access to your Page class's instance methods and properties. However, in many cases you can work around this by refactoring instance methods into static methods. (See this article for more information.)
Client Callbacks are implemented as instance methods on your page.
They have access to other instance methods on your page, including stuff stored in ViewState. This is convenient, but comes at a price: in order to build the Page instance, the client has to send a relatively large amount of data to the server and has to run through a fair chunk of the page lifecycle. (This article has a nice diagram showing which parts.)
Apart from that, the cost of setting them up varies quite a bit and clients use them differently:
Client Callbacks require a fair amount of idiosyncratic scaffolding code that is intimately coupled to Asp.Net (as shown in the link above). Given the much easier alternatives we have now, I'm tempted to say that this technology is obsolete (for new development).
Calling page methods using ScriptManager requires less setup than Client Callbacks: you just have to pop a ScriptManager onto your page, set EnablePageMethods = true, then access your page methods through the PageMethods proxy.
Calling page methods using jQuery only requires you to link the jQuery library (and familiarity with jQuery, of course).
I prefer to use jQuery to access page methods because it's independent of the server framework and exposes just the right amount of implementation details, but it's really just a matter of taste. If you go with ScriptManager, its proxy makes the page method calls a little easier on the eyes, which some might consider more important.
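To make the comparison concrete, here is a minimal page method sketch (GetGreeting is an illustrative name):

```csharp
// Default.aspx.cs -- a static page method. Callable through the PageMethods
// proxy (ScriptManager with EnablePageMethods="true") or via a jQuery POST
// to Default.aspx/GetGreeting with a JSON content type.
using System.Web.Services;

public partial class _Default : System.Web.UI.Page
{
    [WebMethod]
    public static string GetGreeting(string name)
    {
        // Static: no Page instance is built and no ViewState is sent.
        return "Hello, " + name;
    }
}
```

With ScriptManager the call is PageMethods.GetGreeting(name, onSuccess); with jQuery it is a POST of the JSON string {"name":"..."} to Default.aspx/GetGreeting with Content-Type application/json.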
I would say there are differences, but I'd tend to do it the way you feel more comfortable with.
I have used both approaches, and having jQuery calls from the page is generally faster. I write an ashx handler that does the job the jQuery call needs (query the database, process something, etc.) and call it from the page. I wouldn't use an aspx page for a jQuery call, because you'd be sending a lot of info that you won't need at all. The difference/benefit of using an Ajax.Net call is that you don't need to build another page to process things; you can use the same page events to do it.
For example, if you need to fill a second drop-down list using the selected value of a first one, you could use Ajax.Net to call SelectedIndexChanged in the page code-behind, and when it fires, the page goes through Page_Load, SelectedIndexChanged, Page_PreRender and so on. In the event method you'd query the db and fill the second ddl.
With jQuery it's a bit different. You make your call to an ashx handler; the handler is just a server method that does the magic and returns data in the form you want (JSON, an array of strings, XML, etc.), and you fill the second ddl using JavaScript.
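A minimal sketch of such a handler, returning the options for the second ddl as JSON (the names and data are illustrative, standing in for the real database query):

```csharp
// Cities.ashx.cs -- generic handler that feeds the second drop-down
// (names and data are illustrative).
using System.Web;
using System.Web.Script.Serialization;

public class Cities : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string country = context.Request.QueryString["country"];

        // Stand-in for the real database lookup.
        string[] cities = country == "ES"
            ? new[] { "Madrid", "Barcelona" }
            : new[] { "London", "Leeds" };

        context.Response.ContentType = "application/json";
        context.Response.Write(new JavaScriptSerializer().Serialize(cities));
    }

    public bool IsReusable { get { return true; } }
}
```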
As I told you before, some people don't feel too comfortable with client code and tend to do it on the server, but I always say that you need to use the correct tool for the right job, so know your tools and apply them wisely.
If you want to know more about ASP.Net, ASHX handlers and jQuery, you can read a post that I wrote about it.
Hope it helps.
They are essentially the same. Both:
Set up a web service for you that the JavaScript for the control can call.
Provide an asynchronous response without involving the page lifecycle.
They are different:
Page Methods simply require that you decorate a static method with an attribute and you are done. The rest of the magic is handled by HTTP Handlers and Modules. Callbacks require you to implement a few interfaces and handle the async event handlers yourself (see the sketch after this list). I find them to be a little more of a pain.
Callbacks only work with certain controls. Calling page methods allows you to affect any control through custom javascript. Callbacks have a slight advantage here in that the client-side behavior is already written and fixed. With page methods you have more flexibility, though (the behavior on the client side is determined by you).
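For comparison, here is a rough sketch of the scaffolding a Client Callback needs (ReceiveServerData is a client-side function you would also have to write yourself):

```csharp
// Sketch of the server-side plumbing for a Client Callback.
using System;
using System.Web.UI;

public partial class CallbackPage : Page, ICallbackEventHandler
{
    private string result;

    protected void Page_Load(object sender, EventArgs e)
    {
        // Emit the JS expression that triggers the callback; the client-side
        // "ReceiveServerData" function receives the result.
        string callbackRef = ClientScript.GetCallbackEventReference(
            this, "arg", "ReceiveServerData", "context");
        ClientScript.RegisterClientScriptBlock(GetType(), "CallServer",
            "function CallServer(arg, context) { " + callbackRef + "; }", true);
    }

    // Runs on the server when the client fires the callback.
    public void RaiseCallbackEvent(string eventArgument)
    {
        result = "You sent: " + eventArgument;
    }

    // Returns the string handed to ReceiveServerData on the client.
    public string GetCallbackResult()
    {
        return result;
    }
}
```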
There are a few other differences, but these are the basics. My understanding is that client callbacks tend to perform as well as Page Methods, but are not used as much because they are only available in certain situations, whereas a Page Method is always a valid avenue.
As for the ScriptManager vs. jQuery question, my feeling is that it's about taste more than anything. I like jQuery's syntax and I feel like it performs better, but in the grand scheme of things the most expensive part is the XmlHttpRequest; next to that, any difference in JavaScript execution time is likely to be insignificant.
I'm investigating an asp.net web application that extensively uses User Controls on each page. Most pages contain around 10-20 user controls. The User controls seem to be the visual representation of the business objects (if that makes sense), although at a finer granularity such as each tab of a tab control having its contents in a user control. The project itself has over 200 user controls (ascx files).
The performance of the application is very poor (and the reason I'm investigating). Each event (such as a click or dropdown selection etc) requires about 5 seconds for the page to load (10 seconds whilst in visual studio). The application has no use of Ajax.
Tracing is painful as the aspx pages themselves have no code in the code-behind as the user controls look after all of this, so tracing a single page requires trace statements in all the user controls that are on that page.
I actually think that having each user control look after its business code and being re-usable is a smart idea, but is an excessive use of user controls going to incur a performance hit? Does this seem like the structure of an asp.net application that was written by someone with a strong WinForms background?
EDIT
Thought I should add that I'm not questioning the use of user controls (or even the amount), but simply whether having so many on a page that all accomplish things (each user control connects to the database, for example) will generally cause performance problems... For example, if just one user control posts back to do something, what about the processing of all the others, some visible and some not? @David McEwing mentioned that he had 40 optimised user controls performing well, but if the developer was WinForms-based or "not familiar with asp.net", how are they going to make sure each one is optimised?
EDIT2
After getting a SQL statement trace, the same calls for data are being executed 5-6 times per page call for each event, as the different user controls require data that is not stored commonly, e.g. each user control in the tab (mentioned above) makes the same call to populate an object from the database... I'm really not here to accuse user controls of being the problem (should I delete the question?), as clearly the problem is NOT user controls but the use of them in this particular case... which I believe is excessive!
Having 10-20 (or even hundreds of) user controls on a page is, in itself, a non-issue. The presence of the controls themselves, and the idea of encapsulation, is definitely not the source of your problems.
It's impossible to say precisely what the problem is without actually profiling the application of course, but based on my experience I can say this:
What is more likely is the specific implementation of the business logic inside each user control is poor. With postbacks taking as long as you describe, each control probably looks back to your DAL for its own data on each request. This can be mitigated by two things:
Make sure user controls cache all their data on first load and never re-load it unless explicitly instructed to (usually by an event from a lower-level service)
Ensure the controls all use a set of common services which can reuse data. E.g. if two controls need access to the customers list, and they are executing in the context of the same request session, that should only require one customer list lookup.
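A minimal sketch of the second point, using HttpContext.Items as a per-request cache so that every control on the page shares one lookup (CustomerService and the stub data are illustrative):

```csharp
// Memoize a lookup for the duration of one request so that several
// user controls on the same page share a single database hit.
using System.Collections.Generic;
using System.Web;

public class Customer { public string Name; }

public static class CustomerService
{
    public static IList<Customer> GetCustomers()
    {
        const string key = "CustomerService.Customers";
        var items = HttpContext.Current.Items;
        if (items[key] == null)
        {
            items[key] = LoadCustomersFromDb(); // one DB call per request
        }
        return (IList<Customer>)items[key];
    }

    // Illustrative stand-in for the real data-access call.
    private static IList<Customer> LoadCustomersFromDb()
    {
        return new List<Customer> { new Customer { Name = "Example" } };
    }
}
```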
I'll put myself firmly in the camp of folks that suggest there is no hard limit to a number of user controls that should be used on a page. It sounds like application-wide tracing is in order here instead of page-level tracing. It may very well be that a handful of these controls is causing the problem. Heck, it could be a single control causing all the fuss. However, since it's impossible to make any assumption about the level of resource-usage that the "average" (if there is such a thing) user-control takes up, it's likewise impossible to suggest a limit. Otherwise, we'd be able to come up with similar limits to the number of members to a class or the number of stored procedures to a database.
Now, if we're talking about 20 complex user-controls that are each retrieving their own data with each refresh and each with a bunch of sub-controls using ViewState whether needed or not, then yeah, that's a problem. Still, it has more to do with overall design than with there being too many controls. If, on the other hand, they've created a common user control to act as nothing more than the composite of a label to the left of a textbox (or maybe even every combination of label + user-actionable control) and have sprinkled this control throughout the app, I can imagine that you'd get a bunch of these on a page and I can't see why that would necessarily hurt anything.
I take it that you are not familiar with applications which use so many user controls?
It sounds like you may be jumping to the conclusion that this unfamiliar aspect of the application is the cause of the unfamiliar bad performance. Instead of making assumptions, why not try one of the following profiling tools, and find out:
JetBrains' dotTrace
Red-Gate ANTS
Automated QA's AQTime
These can all do memory and CPU profiling of ASP.NET applications.
I believe that one of the key purposes of UserControls is code reuse. That is, if the same functionality occurs on multiple web pages, then it is better to create a UserControl for it. That not only saves the developer from writing (or copying and pasting) the same code to several web pages, but it also makes maintenance much easier. Any change made to the UserControl is implemented automatically everywhere the UserControl is used. The maintenance developer doesn't have to worry about finding all the different places that the code needs changing.
I'm not sure a single-use UserControl is as effective. They do encapsulate and segregate the code, which is nice on a busy web page.
Can you ascertain whether your UserControls are reused, or whether many of them are only used once?
I agree with Saunders about doing some profiling to determine the impact certain things have.
Note that you can get some free stress-testing tools for IIS here: http://support.microsoft.com/kb/840671
I will say, though, that having too many controls is probably not a good thing, IMHO. Without knowing more, I'd tentatively say 20 is too many.
I have to integrate an existing, simple ASP.NET Web Forms app, including postbacks etc., into an external site with a jQuery load() call. The app was intended to be integrated through an iframe, and I doubt this is possible without a rewrite.
The app is a basic questionnaire that leads the user to a product suggestion at the end.
Does anyone have any pointers on how I could solve this? I guess I will probably have to rewrite the app with web services and dynamic calls to RenderUserControls; I will also need access to the page that calls load() and will have to write additional jQuery methods to handle the user input... Will I have to remove all of ASP.NET's postback calls and rewrite the handling of the user input?
First of all, you should note that the load() function, like all Ajax, only works within the same domain. So if the 'external site' is on another domain, Ajax is the wrong choice.
It does sound like a lot of hard work, depending on the complexity of the page. Postbacks can occur in many places: image clicks, combo selects, etc. Also, there are hidden fields to worry about, like the ViewState and event-handler fields, which have the same names on both pages. You'll have an easier time if the external site has no state and no postbacks of its own.
If the pages are relatively simple, this can probably be done. It's been my experience that forms don't work well inside other forms, so you'll have to remove one of them (probably the loaded page's form), or place them one after the other. As you've mentioned, you'll have to rewrite the postbacks; you'll want to serialize the data. You may be able to change this string to fit the names on the original page (if you've changed the name of the viewstate, etc., it's easier to change it back in the serialized string than to mess with IDs), post it to the original page, and load again.
Personally, as much as I like jQuery, and as much as this project sounds interesting (and it is), I'd probably go for a server-side solution. It sounds much easier to create a user control (that may use Ajax itself), or to expose the page's functionality using web services or, better, generic handlers.