Best Practices for Passing Data Between Pages - asp.net
The Problem
In the stack that we re-use between projects, we are putting a little bit too much data in the session for passing data between pages. This was good in theory because it prevents tampering, replay attacks, and so on, but it creates as many problems as it solves.
Session loss itself is an issue, although it's mostly handled by implementing Session State Server (or by using SQL Server). More importantly, it's tricky to make the back button work correctly, and it's also extra work to create a situation where a user can, say, open the same screen in three tabs to work on different records.
And that's just the tip of the iceberg.
There are workarounds for most of these issues, but as I grind away, all this friction gives me the feeling that passing data between pages using session is the wrong direction.
What I really want to do here is come up with a best practice that my shop can use all the time for passing data between pages, and then, for new apps, replace key parts of our stack that currently rely on Session.
It would also be nice if the final solution did not result in mountains of boilerplate plumbing code.
Proposed Solutions
Session
As mentioned above, leaning heavily on Session seems like a good idea, but it breaks the back button and causes some other problems.
There may be ways to get around all the problems, but it seems like a lot of extra work.
One thing that's very nice about using session is the fact that tampering is just not an issue. Compared to passing everything via the unencrypted QueryString, you end up writing much less guard code.
Cross-Page Posting
In truth I've barely considered this option. I have a problem with how tightly coupled it makes the pages -- if I start doing PreviousPage.FindControl("SomeTextBox"), that seems like a maintenance problem if I ever want to get to this page from another page that maybe does not have a control called SomeTextBox.
It seems limited in other ways as well. Maybe I want to get to the page via a link, for instance.
QueryString
I'm currently leaning towards this strategy, like in the olden days. But I probably want my QueryString to be encrypted to make it harder to tamper with, and I would like to handle the problem of replay attacks as well.
4 Guys from Rolla has an article about this.
However, it should be possible to create an HttpModule that takes care of all this and removes all the encryption sausage-making from the page. Sure enough, Mads Kristensen has an article where he released one. However, the comments make it sound like it has problems with extremely common scenarios.
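For what it's worth, the rough shape of the module I have in mind is sketched below. This is only my own illustration - the "enc" parameter name and the Crypto helper are placeholders, not anything from Mads Kristensen's implementation:

using System;
using System.Web;

// Hypothetical sketch: decrypt an encrypted query string before the page sees it.
public class SecureQueryStringModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += new EventHandler(OnBeginRequest);
    }

    private void OnBeginRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;
        string encrypted = app.Request.QueryString["enc"];

        if (!String.IsNullOrEmpty(encrypted))
        {
            // Crypto.Decrypt is a stand-in for whatever symmetric routine you use
            // (e.g. AES with a server-side key).
            string plainQuery = Crypto.Decrypt(encrypted);

            // Rewrite the path so the page sees ordinary query string values.
            app.Context.RewritePath(app.Request.Path, String.Empty, plainQuery);
        }
    }

    public void Dispose() { }
}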
Other Options
Of course this is not an exhaustive look at the options, but rather the main options I'm considering. This link contains a more complete list. The ones I didn't mention, such as Cookies and the Cache, are not appropriate for the purpose of passing data between pages.
In Closing...
So, how are you handling the problem of passing data between pages? What hidden gotchas did you have to work around, and are there any pre-existing tools around this that solve them all flawlessly? Do you feel like you've got a solution that you're completely happy with?
Thanks in advance!
Update: Just in case I'm not being clear enough, by 'passing data between pages' I'm talking about, for instance, passing a CustomerID key from a CustomerSearch.aspx page to Customers.aspx, where the Customer will be opened and editing can occur.
First, the problems with which you are dealing relate to handling state in a state-less environment. The struggles you are having are not new and it is probably one of the things that makes web development harder than windows development or the development of an executable.
With respect to web development, you have five choices, as far as I'm aware, for handling user-specific state which can all be used in combination with each other. You will find that no one solution works for everything. Instead, you need to determine when to use each solution:
Query string - Query strings are good for passing pointers to data (e.g. primary key values) or state values. Query strings by themselves should not be assumed to be secure even if encrypted because of replay. In addition, some browsers have a limit on the length of the url. However, query strings have some advantages such as that they can be bookmarked and emailed to people and are inherently stateless if not used with anything else.
Cookies - Cookies are good for storing very tiny amounts of information for a particular user. The problem is that cookies have a size limitation, beyond which the data will simply be truncated, so you have to be careful about putting custom data in a cookie. In addition, users can kill cookies or stop their use (although that would prevent use of standard Session as well). Similar to query strings, cookies are better, IMO, for pointers to data than for the data itself unless the data is tiny.
Form data - Form data can hold quite a bit of information, though at the cost of post times and, in some cases, reload times. ASP.NET's ViewState uses hidden form variables to maintain information. Passing data between pages using something like ViewState has the advantage of working nicer with the back button, but it can easily create ginormous pages which slow down the experience for the user. In general, the ASP.NET model is not built around cross-page posting (although it is possible) but instead works on posts back to the same page and, from there, navigating to the next page.
Session - Session is good for information that relates to a process with which the user is progressing or for general settings. You can store quite a bit of information into session at the cost of server memory or load times from the databases. Conceptually, Session works by loading the entire wad of data for the user all at once either from memory or from a state server. That means that if you have a very large set of data you probably do not want to put it into session. Session can create some back button problems which must be weighed against what the user is actually trying to accomplish. In general you will find that the back button can be the bane of the web developer.
Database - The last solution (which again can be used in combination with others) is that you store the information in the database in its appropriate schema with a column that indicates the state of the item. For example, if you were handling the creation of an order, you could store the order in the Order table with a "state" column that determines whether it was a real order or not. You would store the order identifier in the query string or session. The web site would continue to write data into the table to update the various parts and child items until eventually the user is able to declare that they are done and the order's state is marked as being a real order. This can complicate reports and queries in that they all need to differentiate "real" items from ones that are in process.
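To make that last option concrete, here is a rough sketch (the names and connection string key are mine, nothing more) of combining a query string pointer with a database state column for an in-progress order:

using System;
using System.Data.SqlClient;

public partial class EditOrder : System.Web.UI.Page
{
    private readonly string connectionString =
        System.Configuration.ConfigurationManager.ConnectionStrings["Main"].ConnectionString;

    protected void Page_Load(object sender, EventArgs e)
    {
        int orderId;
        if (!int.TryParse(Request.QueryString["OrderID"], out orderId))
        {
            Response.Redirect("OrderSearch.aspx");   // bad or missing pointer
            return;
        }

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT OrderID, CustomerID, OrderState FROM Orders WHERE OrderID = @OrderID", conn))
        {
            cmd.Parameters.AddWithValue("@OrderID", orderId);
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // OrderState = 0 means draft / in progress; it only becomes a
                // "real" order when the user finishes and the state is updated.
                if (reader.Read())
                    BindOrder(reader);
            }
        }
    }

    private void BindOrder(SqlDataReader reader)
    {
        // Placeholder: push the columns into your controls here.
    }
}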
One of the items mentioned in your later link was Application Cache. I wouldn't consider this to be user-specific since it is application wide. (It can obviously be shoe-horned into being user-specific but I wouldn't recommend that either). I've never played with storing data in the HttpContext outside of passing it to a handler or module but I'd be skeptical that it was any different than the above mentioned solutions.
In general, there is no one solution to rule them all. The best approach is to assume on each page that the user could have navigated to that page from anywhere (as opposed to assuming they got there by using a link on another page). If you do that, back button issues become easier to handle (although still a pain). In my development, I use the first four extensively and on occasion resort to the last solution when the need calls for it.
Alright, so I want to preface my answer with this: Thomas clearly has the most accurate and comprehensive answer so far for people starting fresh. This answer isn't in the same vein at all. My answer is coming from a "business developer's" standpoint. As we all know too well, sometimes it's just not feasible to spend money re-writing something that already exists and "works"... at least not all in one shot. Sometimes it's best to implement a solution which will let you migrate to a better alternative over time.
The only thing I'd say Thomas is missing is client-side JavaScript state. Where I work, we've found customers are coming to expect "Web 2.0"-type applications more and more. We've also found these sorts of applications typically result in much higher user satisfaction. With a little practice, and the help of some really great JavaScript libraries like jQuery (we've even started using GWT and found it to be AWESOME), communicating with JSON-based REST services implemented in WCF can be trivial. This approach also provides a very nice way to start moving towards an SOA-based architecture and a clean separation of UI and business logic.
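As a flavour of what that looks like, here's a loose sketch of a WCF 3.5 JSON endpoint of the kind I mean - the contract and names are purely illustrative:

using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;

[DataContract]
public class CustomerSummary
{
    [DataMember] public int CustomerID;
    [DataMember] public string Name;
}

[ServiceContract]
public interface ICustomerService
{
    // GET customers/{id} returns JSON when hosted with the webHttp binding.
    [OperationContract]
    [WebGet(UriTemplate = "customers/{id}", ResponseFormat = WebMessageFormat.Json)]
    CustomerSummary GetCustomer(string id);
}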
But I digress.
It sounds to me as though you already have an application, and you've already stretched the limits of ASP.NET's built-in session state management. So... here's my suggestion (assuming you've already tried ASP.NET's out-of-process session management, which scales significantly better than in-process/on-box session management, and it sounds like you have because you mentioned it): NCache.
NCache provides you with a "drop-in" replacement for ASP.NET's session management options. It's super easy to implement, and could "band-aid" your application well enough to get you through - without any significant immediate investment in refactoring your existing codebase.
You can use the extra time and money to start reducing your technical debt by focusing new development on things with immediate business-value - using a new approach (such as any of the alternatives offered in the other answers, or mine).
Just my thoughts.
Several months later, I thought I would update this question with the technique I ended up going with, since it has worked out so well.
After playing with more involved session state handling (which resulted in a lot of broken back buttons and so on) I ended up rolling my own code to handle encrypted QueryStrings. It's been a huge win -- all of my problem scenarios (back button, multiple tabs open at the same time, lost session state, etc) are solved and the complexity is minimal since the usage is very familiar.
This is still not a magic bullet for everything but I think it's good for about 90% of the scenarios you run into.
Details
I built a class called CorePage that inherits from Page. It has methods called SecureRequest and SecureRedirect.
So you might call:
SecureRedirect(String.Format("Orders.aspx?ClientID={0}&OrderID={1}", ClientID, OrderID))
CorePage parses out the QueryString and encrypts it into a QueryString variable called CoreSecure. So the actual request looks like this:
Orders.aspx?CoreSecure=1IHXaPzUCYrdmWPkkkuThEes%2fIs4l6grKaznFGAeDDI%3d
If available, the currently logged in UserID is added to the encryption key, so replay attacks are not as much of a problem.
From there, you can call:
X = SecureRequest("ClientID")
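For anyone trying to picture it, a stripped-down sketch of the idea looks roughly like this - this is illustrative only, not the production class, and the crypto helpers are stubs you'd replace with your own:

using System;
using System.Web;

public class CorePage : System.Web.UI.Page
{
    protected void SecureRedirect(string urlWithQueryString)
    {
        int split = urlWithQueryString.IndexOf('?');
        string page = urlWithQueryString.Substring(0, split);
        string query = urlWithQueryString.Substring(split + 1);

        // Encrypt the whole query string into a single CoreSecure parameter.
        string token = HttpUtility.UrlEncode(Encrypt(query, CurrentUserKey()));
        Response.Redirect(page + "?CoreSecure=" + token);
    }

    protected string SecureRequest(string name)
    {
        string token = Request.QueryString["CoreSecure"];
        if (String.IsNullOrEmpty(token))
            return null;

        string query = Decrypt(token, CurrentUserKey());
        return HttpUtility.ParseQueryString(query)[name];
    }

    private string CurrentUserKey()
    {
        // Mix the logged-in user into the key so a URL lifted from one user
        // cannot simply be replayed by another.
        return User.Identity.IsAuthenticated ? User.Identity.Name : String.Empty;
    }

    // Stubs: plug in your own symmetric encryption (e.g. AES keyed off
    // CurrentUserKey() plus a server-side secret).
    private string Encrypt(string plain, string key) { throw new NotImplementedException(); }
    private string Decrypt(string cipher, string key) { throw new NotImplementedException(); }
}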
Conclusion
Everything works seamlessly, using familiar syntax.
Over the last several months I've also adapted this code to work with edge cases, such as hyperlinks that trigger a download - sometimes you need to generate a hyperlink on the client that has a secure QueryString. That works really well.
Let me know if you would like to see this code and I will put it up somewhere.
One last thought: it's weird to accept my own answer over some of the very thoughtful posts other people put on here, but this really does seem to be the ultimate answer to my problem. Thanks to everyone who helped get me there.
After going through all of the above scenarios and answers, and this link on data passing methods, my final advice would be:
COOKIES for:
ENCRYPT[userId's]
ENCRYPT[productId]
ENCRYPT[xyzIds..]
ENCRYPT[etc..]
DATABASE for:
datasets BY COOKIE ID
datatables BY COOKIE ID
all other large chunks BY COOKIE ID
My advice also depends on the statistics below and the details in this link on data passing methods:
I would never do this. I have never had any issues storing all session data in the database, loading it based on the users cookie. It's a session as far as anything is concerned, but I maintain control over it. Don't give up control of your session data to your web server...
With a little work, you can support sub sessions, and allow multi-tasking in different tabs/windows.
As a starting point, I find that critical data elements, such as a Customer ID, are best put into the query string for processing. You can easily track/filter bad data coming off of these elements, and it also allows for some integration with e-mail or other related sites/applications.
In a previous application, the only way to view an employee or a request record involving them was to log into the application, do a search for the employee or do a search for recent records to find the record in question. This became problematic and a big time sink when somebody from a related department needed to do a simple view on records for auditing purposes.
In the rewrite, I made both the employee Ids and request Ids available through basic URLs of "ViewEmployee.aspx?Id=XXX" and "ViewRequest.aspx?Id=XXX". The application was set up to A) filter out bad Ids and B) authenticate and authorize the user before allowing them onto these pages. What this allowed the primary application users to do was to send simple e-mails to the auditors with a URL in the e-mail. When they were in a big hurry, during their bulk processing time, they were able to simply click down a list of URLs and do the appropriate processing.
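For illustration, the "filter bad Ids, then authorize" step on ViewEmployee.aspx was conceptually not much more than this (the names and the role check are simplified stand-ins):

protected void Page_Load(object sender, EventArgs e)
{
    int employeeId;
    if (!int.TryParse(Request.QueryString["Id"], out employeeId) || employeeId <= 0)
    {
        Response.Redirect("EmployeeSearch.aspx");   // A) filter out bad Ids
        return;
    }

    if (!User.Identity.IsAuthenticated || !User.IsInRole("Auditor"))
    {
        Response.Redirect("AccessDenied.aspx");     // B) authenticate and authorize
        return;
    }

    LoadEmployee(employeeId);
}

private void LoadEmployee(int employeeId)
{
    // Placeholder: fetch the employee record and bind it to the page controls.
}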
Other session-related data, such as modification dates and maintaining the "state" of the user's interaction with the application, gets a little more complex, but hopefully this provides a starting point for you.
Related
How can I tell the difference between a post from a browser, and someone trying to post programmatically
Is there a way to determine whether the request coming to a handler (let's assume the handler responds to GET and POST) is being performed by a real browser versus a programmatic client? I already know that it is easy to spoof things like the User Agent and the Referrer, but are there other headers that are more difficult to spoof? Maybe headers that are not commonly available in classes like .NET's HttpWebRequest? The other path that I looked at is maybe using the encrypted ViewState to send a value to the browser that gets validated on the server side, though couldn't that value simply be scraped from the previous response and added as a post parameter to the next request? Any help would be much appreciated. Cheers,
There is no easy way to differentiate because, in the end, a programmatic post looks the same to the server as a post by a user from the browser. As mentioned, captchas can be used to control posting but are not perfect (as it is very hard but not impossible for a computer to solve them). They can also annoy users. Another route is only allowing authenticated users to post, but this can also still be done programmatically. If you want to get a good feel for how people are going to try to abuse your site, then you may want to look at http://seleniumhq.org/ This is very similar to the famous Halting Problem in computer science. See some more on the proof, and Alan Turing, here: http://webcache.googleusercontent.com/search?q=cache:HZ7CMq6XAGwJ:www-inst.eecs.berkeley.edu/~cs70/fa06/lectures/computability/lec30.ps+alan+turing+infinite+loop+compiler&cd=1&hl=en&ct=clnk&gl=us
The most common way is using captchas. Of course captchas have their own issues (users don't really care for them), but they do make it much more difficult to programmatically post data. This doesn't really help with GETs, though you can force users to solve a captcha before delivering content.
There are many ways to do this, like dynamically generated XHR requests that can only be made with human tasks. Here's a great article on NP-hard problems; I can see a huge possibility here: http://www.i-programmer.info/news/112-theory/3896-classic-nintendo-games-are-np-hard.html
One way: you could use some tricky JS to handle tokens on click. Your server issues token IDs to elements on the page during the backend render phase. Log these in a database or data file. Then, when users click around and submit, you can compare the IDs sent via the onclick() function. There are plenty of ways around this, but you could apply some heuristics to determine whether posts are too fast to be from a human. That is, even if they scripted the hijacking of the token IDs and auto-submitted, you could check that the time between click events appears automated. Signed up for a Twitter account lately? They use passive human detection that, while not 100% foolproof, is slower and more difficult to break. Many if not all of the spam accounts there had to be opened by a human.
Another way: http://areyouahuman.com/ As long as you are using encrypted methods, verifying humanity without crappy CAPTCHAs is possible. I mean, don't ignore your headers either; these are complementary approaches. The key is to have enough complexity that the number of ways a bot would have to solve the total set of problems becomes extraordinary. http://en.wikipedia.org/wiki/NP-complete When the day comes when AI can solve multiple complex human problems on its own, we will have other things to worry about than request tampering. http://louisville.academia.edu/RomanYampolskiy/Papers/1467394/AI-Complete_AI-Hard_or_AI-Easy_Classification_of_Problems_in_Artificial Another company doing interesting research is http://www.vouchsafe.com/play-games - they actually use games designed to trick the RTT into training itself to be more solvable only by humans!
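A bare-bones version of the token-per-render idea mentioned above might look like the following in ASP.NET. FormTokenField is an assumed hidden field on the page, and the two-second threshold is just an arbitrary heuristic:

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        // Issue a one-time token when the form is rendered and remember when.
        string token = Guid.NewGuid().ToString("N");
        Cache.Insert("formtoken:" + token, DateTime.UtcNow, null,
                     DateTime.UtcNow.AddMinutes(20),
                     System.Web.Caching.Cache.NoSlidingExpiration);
        FormTokenField.Value = token;                // assumed hidden field on the page
    }
    else
    {
        string token = FormTokenField.Value;
        object issued = Cache["formtoken:" + token];
        Cache.Remove("formtoken:" + token);          // one-time use

        // Missing token, replayed token, or a post far too fast for a human.
        bool suspicious = issued == null ||
            (DateTime.UtcNow - (DateTime)issued).TotalSeconds < 2;

        if (suspicious)
        {
            Response.StatusCode = 400;
            Response.End();
        }
    }
}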
Dynamic form creation in asp.net c#
So, I need some input on refactoring an ASP.NET (C#) application that is basically a framework for creating dynamic forms (any forms). From a high-level point of view, there is a table that has the forms, and then there is a table that has all the form fields, where it is one-to-many between the two. There is a validation table, where each field can have multiple types of validation, and it is one-to-many from the form fields table to the validation table.
So the issue is that this application has been sold as the be-all-end-all customizable solution to all the clients. The idea is that whatever form they want, we can build it just using DB configurations. The thing is, that is not always possible, because there are complex relationships between the fields, and complex relationships between the forms themselves. Also, there is only one codebase, and this is for multiple clients - all of whom host it on their own. There is very specific logic for each of the clients, and it is ALL in the same codebase, with no real separation. Sometimes it was too difficult to make it generic, so there are instances where it has hard-coded logic (as in if formID = XXX then do _). You can also have nested forms, as in, one set of fields on its own within each form.
So usually, when one client requests a change, we make the change and deploy it to that client - but then another client requests a different change, and we make the change and deploy it for THAT client, but the change from the earlier client breaks it, and it's a headache trying to debug, because EVERYTHING is dynamic. There is no way we can roll back the earlier change, because then the other client would be screwed.
It's not done in a real 3-tier architecture - it's a web site with references to a DB class and a class library. There is business logic in the web site itself, in the class library, and in the database stored procs (validation is done in the stored procs).
I've been put in charge of re-organizing the whole thing, and these are my thoughts/questions: I think this is a bad model in general, because one of the things I heard one of the developers say is that anytime any client makes a change, we should deploy to everybody - but that is not realistic. If we have, say, 20 clients, there will need to be regression testing on EVERYTHING, since we don't know the impact... There are about 100 forms in total, and there is some similarity between them (not much). But I think the idea that a dynamic engine can solve ALL form requests was not realistic either. Clients come up with the weirdest requests. For example, they have this engine doing a regular data entry form AND a search form.
There is a lot of preserving state between pages, and it is all done using session variables, which is OK, except that it is not really tracked, and so sessions from the same user keep getting overwritten, and I think sessions should be gotten rid of.
Should I really just rewrite the whole thing? This app is about 3 years old, and there has been lots of testing and things done, and serious business logic implemented, so I hate to get rid of all that (Joel's advice). But it's really a mess of spaghetti code, and everything takes forever to do, and things break all the time because of minor changes.
I've been reading Martin Fowler's "Refactoring" and Michael Feathers' "Working Effectively with Legacy Code" - and they are good, but I feel they were written for an application that was 'slightly' better architected, where it is still a 3-tiered architecture and there is 'some' semblance of logic. Thoughts/input, anyone? Oh, and "Help!"
My current project sounds like almost exactly the same product you're describing. Fortunately, I learned most of my hardest lessons on a former product, and so I was able to start my current project with a clean slate. You should probably read through my answer to this question, which describes my experiences and the lessons I learned.
The main thing to focus on is the idea that you are building a product. If you can't find a way to implement a particular feature using your current product feature set, you need to spend some additional time thinking about how you could turn this custom one-off feature into a configurable feature that can benefit all (or at least many) of your clients. So:
If you're referring to the model of being able to create a fully customizable form that makes client-specific code almost unnecessary, that model is perfectly valid and I have a maintainable working product with real, paying clients that can prove it. Regression testing is performed on specific features and configuration combinations, rather than a specific client implementation. The key pieces that make this possible are:
An administrative interface that is effective at disallowing problematic combinations of configuration options.
A rules engine that allows certain actions in the system to invoke customizable triggers and cause other actions to happen.
An integration framework that allows data to be pulled from a variety of sources and pushed to a variety of sources in a configurable manner.
The option to inject custom code as a plugin when absolutely necessary.
Yes, clients come up with weird requests. It's usually worthwhile to suggest alternative solutions that will still solve the client's problem while still allowing your product to be robust and configurable for other clients. Sometimes you just have to push back. Other times you'll have to do what they say, but use wise architectural practices to minimize the impact this could have on other client code.
Minimize use of the session to track state. Each page should have enough information on it to track the current page's state. Information that needs to persist even if the user clicks "Back" and starts doing something else should be stored in a database. I have found it useful, however, to keep a sort of breadcrumb tree on the session, to track how users got to a specific place and where to take them back to when they finish. But the ID of the node they're actually on currently needs to be persisted on a page-by-page basis, and sent back with each request, so weird things don't happen when the user is browsing to different pages in different tabs.
Use incremental refactoring. You may end up re-writing the whole thing twice by the time you're done, or you may never really "finish" the refactoring. But in the meantime, everything will still work, and you'll have new features every so often. As a rule, rewriting the whole thing will take you several times as long as you think it will, so don't try to take the whole thing in a single bite.
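For what it's worth, the breadcrumb idea can be as small as a stack of return URLs kept in session, while the ID of the record being edited stays on the page itself. Purely a sketch, with names of my own choosing:

using System.Collections.Generic;
using System.Web;

// Sketch only: a minimal breadcrumb stack kept in session.
public static class Breadcrumbs
{
    private const string Key = "BreadcrumbStack";

    private static Stack<string> Current
    {
        get
        {
            Stack<string> stack = HttpContext.Current.Session[Key] as Stack<string>;
            if (stack == null)
            {
                stack = new Stack<string>();
                HttpContext.Current.Session[Key] = stack;
            }
            return stack;
        }
    }

    public static void Push(string returnUrl)
    {
        Current.Push(returnUrl);
    }

    public static string PopOrDefault(string fallback)
    {
        return Current.Count > 0 ? Current.Pop() : fallback;
    }
}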
I have a number of similar apps for building dynamic forms that I support. There's a whole lot of things you could/could not do & you're right to think hard before throwing away 3 years of testing/development. My input for you to consider is to implement a plug-in architecture on top of what you're got. Any custom code for a form goes in the plug-in & the name of this plug-in is stored with the form. When you generate a form, the correct plug-in is called to enhance the base functionality. that way you get to move all the custom code out of the existing library. It should also mean less breaking changes, each plug-in only affects the form it's attached to. From that point it'll be easy to refactor the core engine as it's common functionality across all clients & forms.
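Concretely, the plug-in hook could be as simple as the sketch below - the interface, factory and FormInstance type are names I've made up purely to show the shape of it:

using System;

// Stand-in for whatever object your engine builds from the form tables.
public class FormInstance
{
}

public interface IFormPlugin
{
    // Called after the base engine has generated the form, so any
    // client-specific behaviour lives in the plug-in, not the core library.
    void Enhance(FormInstance form);
}

public static class FormPluginFactory
{
    public static IFormPlugin Create(string pluginTypeName)
    {
        if (String.IsNullOrEmpty(pluginTypeName))
            return null;                                // form has no custom code

        // pluginTypeName would be stored with the form definition in the database.
        Type type = Type.GetType(pluginTypeName, true);
        return (IFormPlugin)Activator.CreateInstance(type);
    }
}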
Since your application seems to have become a big ball of mud, a complete (or an almost complete) rewrite might make sense. You should also take into account new technologies like document-oriented databases (CouchDB, MongoDB). Most of the form definitions could probably fit pretty well in document-oriented databases. For example, to define a customer form, you could use a document that looks like:

{
  Type: "FormDefinition",
  EntityType: "Customer",
  Fields: [
    { FieldName: "CustomerName",
      FieldType: "String",
      Validations: [
        { ValidationType: "Required" },
        { ValidationType: "StringLength", Minimum: 15, Maximum: 50 }
      ]},
    ...
    { FieldName: "CustomerType",
      FieldType: "Dropdown",
      PossibleValues: ["Standard", "Valued", "Gold"],
      DefaultValue: ["Standard"],
      Validations: [
        { ValidationType: "Required" },
        { ValidationType: "Custom",
          ValidationClass: "MySystem.CustomerName.CustomValidations.CustomerStatus" }
      ]},
    ...
  ]
}

With this kind of document to define your forms, you could easily add forms and validations which are customer-specific. You could easily add subforms using a FieldType of SubForm or whatever. You could define FieldTypes for all common types of fields like e-mail, phone numbers, address, etc.

namespace MySystem.CustomerName.CustomValidations
{
    public class CustomerStatus : IValidator
    {
        private FormContext form;
        private List<ValidationError> validationErrors;

        public CustomerStatus(FormContext fc)
        {
            this.validationErrors = new List<ValidationError>();
            this.form = fc;
        }

        public List<ValidationError> Validate()
        {
            if (this.form.Fields["CustomerType"] == "Gold" && int.Parse(this.form.Fields["OrderCount"]) < 10)
            {
                this.validationErrors.Add(new ValidationError("A gold customer must have at least 10 orders"));
            }

            if (this.form.Fields["CustomerType"] == "Valued" && int.Parse(this.form.Fields["OrderCount"]) < 5)
            {
                this.validationErrors.Add(new ValidationError("A valued customer must have at least 5 orders"));
            }

            return this.validationErrors;
        }
    }
}

A record of a document with that definition could look like this:

{
  Type: "Record",
  EntityType: "Customer",
  Fields: [
    { FieldName: "CustomerName", Value: "ABC Corp." },
    { FieldName: "CustomerType", Value: "Gold" },
    ...
  ]
}

Sure, this solution is a lot of work, but if/when realized it could be really easy to create/update/customize forms.
This is a common but (IMO) somewhat naive design approach. "Instead of solving the customer's problem, let's build a tool to let them solve their own problems!" But the reality is that customers generally want YOU to solve their ACTUAL problems. So build things that solve their problems. If you can architect it in a way that allows you to reuse some parts for different customers, fine. But that is generally what the frameworks have done for you already - work out the common features that applications need and make them available in neat packages.
Filtering Data in ASP.NET Web Services
I've been using this site for quite a while, usually being able to sort out my questions by browsing through the questions and following tags. However, I've recently come across a question that is rather hard to look up amongst the great number of questions asked - a question I hope some of you might be able to share your opinion on.
As my problem is a bit hard to fit into a single title line, I'll try to give a bit more detail on the problem I've encountered. So, as the title says, I need to filter, or limit, some of the response data my standard ASP.NET SOAP-based web service returns when invoking various web methods. The web service is used to return data used by other systems (a data repository, more or less), where the client today is able to specify a few parameters on how the data should be filtered and in return gets a full set of data back.
Well, easy enough, I thought: just put additional filtering options on the existing web methods which need a bit more filtering applied, make adjustments on the server side, and we are all set to go - well, unfortunately it turned out to be a bit more tricky than this.
The problem I am facing is that I'm working on a web service running in a production environment, which needs to be extended such that additional filters can be applied to existing web methods being invoked, without affecting the calls already being made by other systems used by the customer using their client stubs. This is where I am a bit troubled, since I can't seem to find a "right solution" for extending the current web service. Today, the filter is sent as a custom data structure which holds information on which data should be filtered, but I am not sure if I can simply add more information to this data structure without breaking code at the clients.
One of my co-workers suggested that I could implement a solution where I would extend the web.config on the server side to hold a section with details on which data should be excluded (filtered out), but I don't find this to be a viable long-term solution - and I don't trust customers with such an option, since this is likely to go wrong at some point.
So the solution I am looking for is a way to apply a "second filter" to the data I am requesting from the client, so that instead of getting a full set of data back it only returns a fraction, implemented such that the filter can be easily modified and it must not affect the current client calls.
Any suggestions on how I should approach this problem? Thanks! Kind regards, E.
A pretty common practice is to create another instance of the application OR use part of the url to signify the version of the endpoint they are connecting to, perhaps the virtual directory is the date. That way old calls will go to the old API and new calls will come in on the new API. http://api.example.com/dostuff vs http://api.example.com/6-7-2011/dostuff
LINQ/EDM cache's efficiency in a webapp
I was just learning how to use the LINQ/EDM combination to retrieve and update a simple user-thread-and-comment web app as part of evaluating it. When I turn on SQL Profiler, I rarely see a SQL statement executed by my app. I'm starting to really like how well it keeps things cached, because as soon as I add new data, it magically updates itself while I'm blinking. But is that something I should be scared of? My concern is when I use this to make a web app with some traffic (whatever hit count pushes this approach to its limit). Should I keep a single context object at app level, so that different sessions can benefit from each other's cache entries? Or should I create and release one on each page submission? I know this sounds like an open-ended question, but I really do have that as a question: how does the Entity Framework cache its data when using LINQ?
On the ObjectContext question you should use a lifetime of per-page-request cycle or smaller. It's designed for a unit of work and not for the application lifetime. Search SO for "ObjectContext lifetime" or "DataContext lifetime" and you'll see this is a common question.
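In other words, something like the fragment below, where the context lives only for the unit of work - the context and entity names here are just assumed for illustration:

using System.Linq;

private void RenameThread(int threadId, string newTitle)
{
    using (MyEntities context = new MyEntities())        // one context per unit of work
    {
        ForumThread thread = context.Threads
                                    .Where(t => t.ThreadID == threadId)
                                    .First();
        thread.Title = newTitle;
        context.SaveChanges();
    }                                                     // context and its cache are released here
}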
Design Decision - Javascript array or http handler
I'm building a web page that allows the user to pick a color and size. Once they have these selected, I need to perform a lookup to see if inventory exists or not and update some UI elements based on this. I was thinking that putting all the single-product data into a multidimensional JavaScript array (there are only 10-50 records for any page instance) and writing some client-side routines around that would be the way to go, for two reasons: one, it keeps the UI fast, and two, it minimizes callbacks to the server. What I'm worried about with this solution is code smell. As an alternative, I'm thinking about using a more AJAX-purist approach of using HTTP handlers and JSON, or perhaps a hybrid with a bit of both. My question is: what are your thoughts as to the best solution to this problem using the ASP.NET 2.0 stack? [Edit] I should also mention that this page will be running in a SharePoint environment.
Assuming the data is static, I would vote option #1. Storing and retrieving data elements in a JavaScript array is relatively foolproof and entirely within your control. Calling back to the server introduces a lot of possible failure points. Besides, I think keeping the data in-memory within the page will require less code overall and be more readable to anyone with a more than rudimentary understanding of JavaScript.
I'm against Ajax for such tasks, and I voted for (and implemented) the first option. As far as I understand, you won't create code smells if the JS part is being written by your server side. From a user point of view, Ajax is an experience-killer for wireless browsing, since any little glitch or mis-service will fail, or simply lengthen the interaction by factors of 20(!). I've implemented even more records than yours in my site, and the users love it. Since some of my users use internet cafés or dubious hotel wifi, it wouldn't work otherwise. Besides, Ajax makes your server-vs-client interaction code much more complex, IMO, which is the trickiest part in web programming.
I would go with your second option by far. As long as the AJAX call isn't performing a long running process for this case, it should be pretty fast. The application I work on does lots with AJAX and HttpHandler, and our calls execute fast. Just ensure you are minimizing the size of your JSON returned in the response.
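As a rough sketch of that handler approach (my names; the inventory lookup is a stub), something like an InventoryCheck.ashx returning a tiny JSON payload is usually all it takes:

using System.Web;

// Called from the page as InventoryCheck.ashx?color=red&size=M
public class InventoryCheckHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string color = context.Request.QueryString["color"];
        string size = context.Request.QueryString["size"];

        bool inStock = CheckInventory(color, size);   // your lookup goes here

        context.Response.ContentType = "application/json";
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        // Hand-rolled JSON keeps this on the plain ASP.NET 2.0 stack mentioned above.
        context.Response.Write("{\"inStock\": " + (inStock ? "true" : "false") + "}");
    }

    public bool IsReusable { get { return true; } }

    private bool CheckInventory(string color, string size)
    {
        // Placeholder: query your inventory table or service here.
        return true;
    }
}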
Go with your second option. If there are that few items involved, the AJAX call should perform fairly well. You'll keep your code off the client side, hopefully prevent any browser-based issues that the client-side scripting might have caused, and have a cleaner application. EDIT: Also consider that client-side script can be modified by the user. If there's no other validation occurring on the user's selection, this could allow them to configure a product that is out of stock.