ASP.NET Post/Redirect/Get question about validation

I'm trying to figure out a logical and quick way to implement the "PRG" (Post/Redirect/Get) pattern in a small site I'm doing, and I've run into an issue I can't think of a good way to solve.
I have a form. It has 2 fields (first and last name). When the user submits the form, I check to make sure that they have data in them before I save it to the database. If they do not, then I show them a nice little error message and leave what little they have entered still in the form.
However, in order to correctly implement PRG as I understand it, I would have to validate on postback, determine there is an error, then redirect the user to the same page via GET, not POST. That would eliminate the scary "you are about to resubmit the contents of a form to the server, which can have unintended consequences..." message if the user reloaded the page.
How can I redirect the user back to the same page and still save their already entered values in the form fields?
I mean, I could pass all the values in the Querystring, but for complex forms this would become very difficult and messy. I could store the values in Session, but that would still be messy, and a good way to bring a webserver to its knees on a busy site.
What do you, my fellow web programmers, do?

As you note, by redirecting, you will lose the data they have entered unless you go through some QueryString gyrations. You are really defeating one of the nice features of asp.net by doing this - the form values are all preserved for you, so that if the page needs to be redisplayed after a post with errors, most of the work is done for you. Why do you want to use a PRG design? What benefit do you expect to get that standard asp.net form handling doesn't give you?
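For reference, the Session variant the question mentions would look roughly like this in WebForms (a sketch only; control names are hypothetical, and the stashed values are removed on the next GET so they don't pile up in session):

    // Code-behind sketch of PRG using Session as the temporary store.
    // FirstNameBox, LastNameBox and ErrorLabel are hypothetical controls.
    protected void Submit_Click(object sender, EventArgs e)
    {
        if (FirstNameBox.Text.Trim().Length == 0 || LastNameBox.Text.Trim().Length == 0)
        {
            Session["FormValues"] = new string[] { FirstNameBox.Text, LastNameBox.Text };
            Session["FormError"] = "Please enter both a first and last name.";
            Response.Redirect("Form.aspx"); // the Redirect/Get step: reloading is now safe
        }
        // ...otherwise save to the database, then redirect to a confirmation page...
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        // On the GET following a failed POST, restore and then discard the stash.
        string[] values = Session["FormValues"] as string[];
        if (!IsPostBack && values != null)
        {
            FirstNameBox.Text = values[0];
            LastNameBox.Text = values[1];
            ErrorLabel.Text = (string)Session["FormError"];
            Session.Remove("FormValues");
            Session.Remove("FormError");
        }
    }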

Related

Webscraping a tricky asp.net page

The overall goal is to perform a search on the following webpage http://www.cma-cgm.com/eBusiness/Tracking/Default.aspx with a container value of CMAU1173561. I have tried two approaches: the PHP extension cURL, and Python's mechanize. The PHP approach involves performing a POST submit using the input fields found on the page (NOTE: these are really ugly on the ASP.NET page). The returned page does not contain any of the search results. The second approach involves using Python's mechanize module. In this approach I load the page, select the form, then change the text field ctl00$ContentPlaceBody$TextSearch to the container value. When I load the response, again there are no search results.
I am at a real dead end. Any help would be appreciated because, as it stands, my next step is to become an ASP.NET expert, which I'd prefer not to.
The source of that page is pretty scary (giant viewstate, tables all over the place, inline CSS, styles that look like they were copied from Word).
Regardless, an ASP.NET form still passes the same raw data to the server as any other form (though this is abstracted away from the developer).
It's very possible that you are missing the cookies which go along with the request. If the search page (or any piece of the site) uses session state, the ASP.Net session cookie must be included in the request. You will be able to tell it from its name (contains "asp.net" and "session").
I assume that you have used a tool like Firebug or Chrome's developer tools to view the complete outgoing request when the page is submitted. From my quick test, it looks like the request may be performed with a GET, not a POST. I submitted the form, looked at the request, and pasted the URL into a new browser window.
Example: http://www.cma-cgm.com/eBusiness/Tracking/Default.aspx?ContNum=CMAU1173561&T=57201202648
This may be all you need to do.
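If so, replaying that GET from code is straightforward; here is a C# sketch (the query-string values are the ones from the example URL above, and the CookieContainer is only there in case the site insists on its ASP.NET session cookie):

    using System;
    using System.IO;
    using System.Net;

    class TrackingCheck
    {
        static void Main()
        {
            // Same URL and query string as the captured GET request.
            string url = "http://www.cma-cgm.com/eBusiness/Tracking/Default.aspx"
                         + "?ContNum=CMAU1173561&T=57201202648";

            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            request.CookieContainer = new CookieContainer(); // keeps any session cookie

            using (WebResponse response = request.GetResponse())
            using (StreamReader reader = new StreamReader(response.GetResponseStream()))
            {
                Console.WriteLine(reader.ReadToEnd()); // raw HTML with the results
            }
        }
    }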

ASP.net Page Persistence Regarding Saving State

Currently, I have an ASP.NET web application that links to another page. The end user clicks on a link which Response.Redirects to a validation page. This works correctly: they finish with the validation page and it Response.Redirects them back to the initial page that they started on. The specific problem is that when the user is brought back to the initial page, any work that they had previously completed is now gone (e.g. filling in textboxes/dropdowns, etc.). I have been reviewing the best way of making sure this doesn't continue to occur, and everything seems to be pointing to saving the view state of the page prior to redirecting to the validation page, and then reloading this view state upon coming back to the initial page. However, I feel like using Response.Redirect will not allow this to occur; if the end user were just clicking the back button, then it would work. Basically, my problem is keeping the data that my end user input present on the initial page. Any thoughts/ideas/suggestions will be greatly appreciated. Please go easy as it is my first post here. Thanks.
I am not sure whether or not this will solve your issue, but a long time ago there was an idea to store the ViewState on the server and restore it on demand.
http://www.codeproject.com/KB/applications/persistentstatepage.aspx
This came at the price of turning off the validation. I remember I tweaked it a little bit:
http://netpl.blogspot.com/2007/11/persistentstatepage-with-event.html
I hope you'll find it useful.
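For comparison, the built-in way to move view state to the server is to override the page's PageStatePersister, as below. Note that this alone does not restore anything on a plain GET (view state is only loaded on postback), which is exactly the gap the PersistentStatePage approach above fills:

    using System.Web.UI;

    public partial class InitialPage : Page
    {
        // Keep view state in Session instead of the hidden __VIEWSTATE field.
        protected override PageStatePersister PageStatePersister
        {
            get { return new SessionPageStatePersister(this); }
        }
    }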
I would suggest looking at this article: 9 ways to persist user state.
Which state mechanism you choose is really up to you and what your application requires. It's a little old (2003) but a good guide to user state.

Abusing HTTP POST

Currently reading Bloch's Effective Java (2nd Edition) and he makes a point to state, in bold, that overusing POSTs in web applications is inherently bad. Unfortunately, he doesn't specify why.
This startled me, because when I do any web development, all I ever use are POSTs! I have always steered clear of GETs for security reasons and because it felt more professional (long, unsightly URLs always bother me for some reason).
Are there performance differences between GET and POST? Can anyone elaborate on why overusing POSTs is bad? My understanding - and preliminary searches - seem to indicate that these two are handled very similarly by the web server. Thanks in advance!
You should use HTTP as it's supposed to be used.
GET should be used for idempotent read queries (e.g. view an item, search for a product, etc.).
POST should be used for create, delete or update requests (e.g. delete an item, update a profile, etc.).
GET allows refreshing the page, bookmarking it, and sending the URL to someone; POST allows none of that. A useful pattern is Post/Redirect/Get (AKA redirect after post).
Note that, except for long search forms, GET URLs should be short. They should usually look like http://www.foo.com/app/product/view?productId=1245, or even http://www.foo.com/app/product/view/1245
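In ASP.NET MVC, for instance, the rule might look like this sketch; the controller, model and data-access names are all illustrative:

    using System.Web.Mvc;

    public class ProductModel { public int Id { get; set; } /* ... */ }

    public class ProductController : Controller
    {
        [HttpGet]  // idempotent read: safe to refresh, bookmark and share
        public ActionResult Details(int productId)
        {
            return View(LoadProduct(productId));
        }

        [HttpPost] // state change: no bookmarkable URL, no accidental re-run
        public ActionResult Update(ProductModel model)
        {
            SaveProduct(model);
            // Post/Redirect/Get: reloading the result page repeats a harmless GET
            return RedirectToAction("Details", new { productId = model.Id });
        }

        // Hypothetical data access, stubbed so the sketch compiles.
        private ProductModel LoadProduct(int id) { return new ProductModel { Id = id }; }
        private void SaveProduct(ProductModel model) { }
    }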
You should almost always use GET when requesting content. Only use POST when you are either:
Transmitting sensitive information which should not appear in the URL bar, or
Changing state on the server (adding/changing/deleting stuff, although recently some web applications use POST to create, PUT to update and DELETE to delete).
Here's the difference: If you want to give the link to the page to a friend, or save it somewhere, or even only add it to your bookmarks, you need the full URL of the page. Just like your address bar should say http://stackoverflow.com/questions/7810876/abusing-http-post at the moment. You can Ctrl-C that. You can save that. Enter that link again, you're back at this page.
Now when you use any action other than GET, there is simply no URL to copy. It's as if your browser said you were at http://stackoverflow.com/question. You can't copy that. You can't bookmark that. Besides, if you tried to reload this page, your browser would ask you whether you want to send the data again, which is rather confusing for the non-tech-savvy users of your page. And annoying for everyone else.
However, you should use POST/PUT when transferring data. URLs can only be so long; you can't transmit an entire blog post in a URL. Also, if you submitted such data via GET and reloaded the page, you would almost certainly double-post, because the warning described above does not appear for GET requests.
GET and POST are very different. Choose the right one for the job.
Since you are using POST for security reasons, I should mention another security factor here: you need to ensure that the data from a form submit is sent in encrypted form (i.e. over HTTPS), even if you are using POST.
As for the difference between GET and POST, it is as simple as this: GET is used to send a GET request, i.e. you want to get data from a page and act upon it, and that is the end of it.
POST, on the other hand, is used to POST data to the application. I am talking about transactions here (complete create, update or delete operations).
If you have a sensitive application that takes, say, an ID to delete a user, you would not want to use GET for it, because a witty user could raise mayhem by simply changing the ID at the end of the URL and deleting random users.
POST allows more data and can also be used to send streams such as file uploads; GET has a limited size. Beyond that, there is hardly any tradeoff in using GET or POST.

Standard way to persist data between requests in ASP.NET-MVC

What is the most standard or best way to persist data between requests?
Should I use cookies or session variables? I'm interested in keeping data like sort order, sort column, and page number (for pagination).
I'm coming from a webforms background so normally this type of thing was automatically handled for me in the viewstate of the controls I was using.
update
I like the querystring idea for searching and more meaningful URLs; however, I'm working on an "index/list" view, which consists of a view with a header and "control" options, like DDLs for filtering, and a partial view that renders the table of data.
The DDLs use a $.load() to call an ActionResult on the controller, which returns the partial view, passing parameters in the querystring; but since these are AJAX requests, the main page URL in the user's browser does not get updated.
Is there a best-practice for taking querystrings off the main-page URL and using them in ajax requests to other ActionResults?
If you want it to survive only through one request/redirect, TempData is your friend.
However, for things like your pagination, the URL is the best method, if only for the ability to share links.
A standard way is to pass those sort of things via URL Query Parameters. You can modify your routing to expect certain URL variables. That way the pages become more search engine friendly as well.
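As a sketch (the parameter names are illustrative), the default MVC model binder picks these straight off the query string, so a URL like /Orders/List?page=2&sortColumn=Date&sortOrder=desc just works:

    using System.Web.Mvc;

    public class OrdersController : Controller
    {
        // GET /Orders/List?page=2&sortColumn=Date&sortOrder=desc
        // The defaults keep the URL clean on a first visit.
        public ActionResult List(int page = 1,
                                 string sortColumn = "Date",
                                 string sortOrder = "asc")
        {
            // ...apply the sort and page to the query, then render the list...
            return View();
        }
    }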
It depends on how permanent you want the information to be:
Things like the page number should indeed be in the URL (as others have pointed out) - this helps with bookmarking, etc, but remember that if you add more content to the list, then that bookmarked result set will not always be what the user wanted...
If you're happy for these values to be lost when a session times out (by default around 20 minutes), then put them in Session.
If you think that sessions are going to time out before the next request, or you want to save the values across visits, then you should be storing them in either cookies or a profile (potentially allowing "Anonymous" profiles, which work with the user's cookies, so they would lose them across machines).
Personally, I'd think very carefully about putting sort order and columns in the URL; if you do, you could actually end up really confusing search engines:
Lots of pages with very similar content (page 1 sorted by date desc, page 1 sorted by date asc, etc.) - search engines don't like duplicate content, and nor should you, as Google (for instance) will only show two pages from your site in a default result set; you want them to be valid pages, not duplicates.
Search engines will spend a lot more time crawling your site, and potentially give up - if on every page they find links to "Sort by this column", they will attempt to follow them, resulting in more work on the server, higher bandwidth use, etc.
These issues can be mitigated through the use of a Robots.txt file denying access to sorted versions of the pages, but if those URLs are generated dynamically, that will be very complex to maintain going forward.
In response to your update, a nice way to achieve that for pages would be to have links to "Previous" and "Next" pages of results (or better yet, a list of all pages in the list) output on the page with their page numbers, which you then hide with JavaScript.
This way users see your nice, AJAXy behaviour, while search engines (and users without JavaScript - on mobile, or using older screen readers, for instance) can still access all your pages - this helps your pages degrade gracefully, or use "Progressive Enhancement".
Things that were previously in viewstate should probably be put back in the client's hands via either hidden fields or cookies.
Session is "too" easy. In a dev environment it works great, pretty much no matter what you put in it. In production scalability and persistence become a problem. In-process session is likely to disappear unexpectedly if you have crashing bug in your site, and requires server affinity when load balancing. Out-of process session fixes the durability and affinity issues, but can still be a performance bottle neck if too much stuff is put in session. A VERY common problem is that each page will put 1 or 2 items into session but never take them out again when they are done. And even if a page removes it session data when it is no longer needed, the data can still get orphaned if a user starts a process and never completes it.
Cookies are a fast and simple way to persist data between requests, and you can also make them live for only a limited time, depending on your needs.
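For example, inside a page or controller (a sketch; the cookie name and lifetime are arbitrary):

    // Remember the chosen sort order for 30 days.
    HttpCookie cookie = new HttpCookie("sortOrder", "date_desc");
    cookie.Expires = DateTime.Now.AddDays(30); // omit Expires for a session-only cookie
    Response.Cookies.Add(cookie);

    // On a later request, read it back with a fallback default.
    HttpCookie saved = Request.Cookies["sortOrder"];
    string sortOrder = (saved != null) ? saved.Value : "date_desc";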
Session is easiest.

IE not offering to save password of ASP.NET form

Sometimes Microsoft does something so stunningly dumb that it makes my head hurt. Help me find out it's really not the case ... please!
I've got an issue with the login page of an ASP.NET (3.5) site I'm developing whereby IE (7 or 8... can't bear to open 6) doesn't offer to save the password when a user logs in. I've checked other browsers, and Firefox, Chrome and Safari all offer to save the password just fine. I've also confirmed that IE password saving on my test boxes is working OK on other sites, e.g. Google.
The searching I've done has turned up very little, but what little it did turn up seems to suggest that IE won't offer to save a password if the form on the page contains more than two text controls. That's the case with my form which also has controls to allow a user to register. And when I remove these additional controls, IE magically prompts to save password, so this does seem to be true.
Now... if ASP.NET allowed me to have multiple forms, all would be well: I would be able to separate the two functions into standalone forms and IE would prompt to save passwords. But ASP.NET doesn't allow me to do this, as it only allows a single server form. I could fudge a non-runat=server form in there and try to do this, but guess what? Because my page uses a MasterPage, any form tag I add is automatically stripped out, even if it's a non-runat=server form.
So, I don't see any way around this without fundamentally changing what I was trying to achieve. It looks like I have to explain to my users that they won't be prompted to have their passwords saved if they use IE (a Microsoft product) because I developed my site with ASP.NET (err ... a Microsoft product).
If this is so, I just can't get over how head-smackingly ridiculous it is. If anyone can offer any ideas on how to get around it, can tell me I've got it all wrong and am a big, stupid idiot myself, or just wants to confirm that it's not just me who thinks this is monumentally dumb, then please, please do so.
Just for the record, I really don't want to (and don't see why I should have to) compromise my design and split my pages in two (which will result in a worse experience for the user).
@Chris That's what I went for in the end.
So, for the benefit of anyone else: I still have my activation controls in a runat=server form and process these in the code for that page. Then I have a second, standard HTML form with plain HTML input textfields that posts to a different .NET page, and this deals with the user's login. I pick up the values in that page via Request.Form and handle the login from there.
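In outline (all names here are illustrative), the second form and the page it posts to look something like this:

    <!-- Login.aspx: a plain HTML form alongside the runat="server" form, -->
    <!-- so IE sees a simple two-field login and offers to save it. -->
    <form action="DoLogin.aspx" method="post">
        <input type="text" name="username" />
        <input type="password" name="password" />
        <input type="submit" value="Log in" />
    </form>

    // DoLogin.aspx.cs: pick the posted values up via Request.Form.
    protected void Page_Load(object sender, EventArgs e)
    {
        string username = Request.Form["username"];
        string password = Request.Form["password"];
        // ...authenticate; on failure, redirect back with a flag, e.g.
        // Response.Redirect("Login.aspx?error=1");
    }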
Upsides:
It all works and users get their logins remembered as they would expect to.
Downsides:
I lost the ability to use a MasterPage (as I need two forms in the page), so I have effectively had to duplicate the template - I don't like this much.
If the user's login is invalid or causes some kind of error, I have to redirect to the initial page and pass it a flag to get it to show a relevant error message - I don't like this much either.
Like I say, though, it just works and in this case that's what was most important. Thanks for your input.
