What is the most standard or best way to persist data between requests?
Should I use cookies or session variables? I'm interested in keeping data like sort order, sort column, and page number (for pagination).
I'm coming from a WebForms background, so normally this type of thing was handled for me automatically in the ViewState of the controls I was using.
Update:
I like the querystring idea for searching and more meaningful URLs; however, I'm working on an "index/list" view, which consists of a view with a header and "control" options (drop-down lists, or DDLs, for filtering) plus a partial view that renders the table of data.
The DDLs use $.load() to call an ActionResult on the controller, which returns the partial view; the parameters are passed in the querystring of that call, but since these are AJAX requests, the main page URL in the user's browser never gets updated.
Is there a best practice for taking querystrings off the main-page URL and using them in AJAX requests to other ActionResults?
If you want it to survive only through one request/redirect, TempData is your friend.
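For example (a minimal sketch in ASP.NET MVC; the controller, action, and key names are just placeholders):

```csharp
using System.Web.Mvc;

public class ItemsController : Controller
{
    [HttpPost]
    public ActionResult Save(string name)
    {
        // ... persist the change ...
        TempData["StatusMessage"] = "Saved.";               // survives exactly one redirect
        return RedirectToAction("Index");
    }

    public ActionResult Index()
    {
        ViewBag.StatusMessage = TempData["StatusMessage"];  // null again on any later request
        return View();
    }
}
```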
However, for things like your pagination, the URL is the best method, if only for the ability to share links.
A standard way is to pass that sort of thing via URL query parameters. You can modify your routing to expect certain URL variables. That way the pages become more search-engine friendly as well.
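For instance, something along these lines (the route, controller, and parameter names are illustrative):

```csharp
using System.Web.Mvc;
using System.Web.Routing;

public static class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // e.g. /products/page/2?sort=Name&dir=asc
        routes.MapRoute(
            "ProductList",
            "products/page/{page}",
            new { controller = "Products", action = "Index", page = 1 });
    }
}

public class ProductsController : Controller
{
    // The route value and the remaining query-string values bind automatically.
    public ActionResult Index(int page = 1, string sort = "Name", string dir = "asc")
    {
        // ... build the sorted, paged result set and return it ...
        return View();
    }
}
```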
It depends on how permanent you want the information to be:
Things like the page number should indeed be in the URL (as others have pointed out) - this helps with bookmarking, etc., but remember that if you add more content to the list, then that bookmarked result set will not always be what the user wanted...
If you're happy for these values to be lost when a session times out (by default around 20 minutes), then put them in Session.
If you think that sessions are going to time out before the next request, or you want to save the values across visits, then you should be storing them in either cookies or a profile (potentially allowing "Anonymous" profiles, which work with the user's cookies, so they would lose them across machines).
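A minimal sketch of the Session and cookie options (the key names, cookie name, and 30-day lifetime are just placeholders):

```csharp
using System;
using System.Web;
using System.Web.Mvc;

public class ListSettingsController : Controller
{
    public ActionResult Remember(string sort, int page)
    {
        // Session: lives on the server, gone when the session times out.
        Session["SortColumn"] = sort;
        Session["Page"] = page;

        // Cookie: lives on the client and survives across visits until it expires.
        var cookie = new HttpCookie("listSettings") { Expires = DateTime.Now.AddDays(30) };
        cookie.Values["sort"] = sort;
        cookie.Values["page"] = page.ToString();
        Response.Cookies.Add(cookie);

        return RedirectToAction("Index");
    }
}
```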
Personally, I'd think very carefully about putting sort order and columns in the URL; if you do, you could end up really confusing search engines:
Lots of pages with very similar content (page 1 sorted by date desc, page 1 sorted by date asc, etc.) - search engines don't like duplicate content, and neither should you: Google (for instance) will only show two pages from your site in a default result set, and you want those to be valid pages, not duplicates.
Search engines will spend lots more time crawling your site, and potentially give up - if on every page they find links to "Sort by this column", they will attempt to follow them, resulting in more work on the server, higher bandwidth use, etc.
These issues can be mitigated with a robots.txt file denying access to sorted versions of the pages, but if that is generated almost dynamically it will be very complex to maintain going forward.
In response to your update: a nice way to achieve that for paging would be to output links to the "Previous" and "Next" pages of results (or better yet, a list of all the page numbers) on the page, and then hide them with JavaScript.
This way users see your nice, AJAXy behaviour, while search engines (and users without JavaScript - mobile users, or those using older screen readers, for instance) can still reach all your pages - this helps your pages degrade gracefully, or use "progressive enhancement".
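One way to output those links in MVC is a small helper along these lines (the helper name and markup are mine, not a standard API):

```csharp
using System;
using System.Text;
using System.Web.Mvc;

public static class PagerHelpers
{
    // Renders plain anchor tags that crawlers and non-JavaScript users can follow;
    // client script can later hide them or hijack the clicks for the AJAX behaviour.
    public static MvcHtmlString PageLinks(this HtmlHelper html, int currentPage,
                                          int totalPages, Func<int, string> pageUrl)
    {
        var sb = new StringBuilder("<ul class=\"pager\">");
        for (int page = 1; page <= totalPages; page++)
        {
            var link = new TagBuilder("a");
            link.MergeAttribute("href", pageUrl(page));
            link.SetInnerText(page.ToString());
            if (page == currentPage)
                link.AddCssClass("current");
            sb.Append("<li>").Append(link.ToString()).Append("</li>");
        }
        sb.Append("</ul>");
        return MvcHtmlString.Create(sb.ToString());
    }
}
```

In a view you would call something like Html.PageLinks(Model.Page, Model.TotalPages, p => Url.Action("Index", new { page = p })), and your script then hides or enhances the resulting list.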
Things that were previously in viewstate should probably be put back in the client's hands via either hidden fields or cookies.
Session is "too" easy. In a dev environment it works great, pretty much no matter what you put in it. In production scalability and persistence become a problem. In-process session is likely to disappear unexpectedly if you have crashing bug in your site, and requires server affinity when load balancing. Out-of process session fixes the durability and affinity issues, but can still be a performance bottle neck if too much stuff is put in session. A VERY common problem is that each page will put 1 or 2 items into session but never take them out again when they are done. And even if a page removes it session data when it is no longer needed, the data can still get orphaned if a user starts a process and never completes it.
Cookies are a fast and simple way to persist data between requests, and you can also make them live only for a limited time, depending on your needs.
Session is easiest.
Related
I am a little new to programming (especially to web design). I have learned that the World Wide Web is based upon a protocol called HTTP, and that each and every item (web pages, images, CSS and JS files, etc.) is transferred via HTTP requests. So my problem is this:
When we fill in a web form (especially a login form like Facebook's) and click the OK, Login, or Submit button, what happens next? Does it send another HTTP request, or does it use some special technique?
Is it safe, or can anyone hack our usernames and passwords while those requests are traveling across the internet?
It actually depends on the person who built the form. The server can produce output that shows the values you entered, or the values can be stored in a database for later use. There are many things that can be done with the submitted data, and what happens really depends on what the application needs.
Added for 2nd question:
There are a number of ways to encrypt this data so it can't be stolen. If you use a very basic technique to transfer the values you submit, then there is a real possibility it can be intercepted. But not to worry, as there are plenty of ways to be safe (sending the form over HTTPS is the standard one).
Currently reading Bloch's Effective Java (2nd Edition) and he makes a point to state, in bold, that overusing POSTs in web applications is inherently bad. Unfortunately, he doesn't specify why.
This startled me, because when I do any web development, all I ever use are POSTs! I have always steered clear of GETs for security reasons and because it felt more professional (long, unsightly URLs always bother me for some reason).
Are there performance differentials between GET and POST? Can anyone elaborate on why overusing POSTs is bad, and why? My understanding - and preliminary searches - seem to indicate that these two are handled very similarly by the web server. Thanks in advance!
You should use HTTP as it's supposed to be used.
GET should be used for idempotent, read-only queries (e.g. view an item, search for a product, etc.).
POST should be used for create, delete, or update requests (e.g. delete an item, update a profile, etc.).
GET allows refreshing the page, bookmarking it, and sending the URL to someone; POST doesn't allow any of that. A useful pattern is Post/Redirect/Get (AKA redirect after post).
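In ASP.NET MVC the pattern looks roughly like this (controller and action names are made up):

```csharp
using System.Web.Mvc;

public class CommentsController : Controller
{
    // The POST handles the state change, then redirects...
    [HttpPost]
    public ActionResult Create(string text)
    {
        // ... save the comment ...
        return RedirectToAction("Index");   // redirect after post
    }

    // ...so the page the user lands on is a plain GET: refreshing
    // or bookmarking it never re-submits the form.
    [HttpGet]
    public ActionResult Index()
    {
        // ... load and display the comments ...
        return View();
    }
}
```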
Note that, except for long search forms, GET URLs should be short. They should usually look like http://www.foo.com/app/product/view?productId=1245, or even http://www.foo.com/app/product/view/1245
You should almost always use GET when requesting content. Only use POST when you are either:
Transmitting sensitive information which should not appear in the URL bar, or
Changing the state on the server (adding/changing/deleting stuff, although recently some web applications use POST to change, PUT to add, and DELETE to delete).
Here's the difference: If you want to give the link to the page to a friend, or save it somewhere, or even only add it to your bookmarks, you need the full URL of the page. Just like your address bar should say http://stackoverflow.com/questions/7810876/abusing-http-post at the moment. You can Ctrl-C that. You can save that. Enter that link again, you're back at this page.
Now when you use any action other than GET, there is simply no URL to copy. It's as if your browser said you were at http://stackoverflow.com/question. You can't copy that. You can't bookmark that. Besides, if you try to reload such a page, your browser will ask whether you want to send the data again, which is rather confusing for the non-tech-savvy users of your page. And annoying for everyone else.
However, you should use POST/PUT when transferring data. URLs can only be so long; you can't transmit an entire blog post in a URL. Also, if you reload such a page, you'll almost certainly double-post, because the warning described above does not appear.
GET and POST are very different. Choose the right one for the job.
If you are using POST for security reasons, I might drop a mention of other security factors here: you need to ensure that the data from a form submit is sent in encrypted form (i.e. over HTTPS), even if you are using POST.
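In ASP.NET MVC, for instance, you can enforce that with the built-in [RequireHttps] attribute; the controller below is only a sketch:

```csharp
using System.Web.Mvc;

// [RequireHttps] makes sure the form (and the credentials POSTed back
// from it) only travel over an encrypted connection.
[RequireHttps]
public class AccountController : Controller
{
    [HttpGet]
    public ActionResult Login()
    {
        return View();
    }

    [HttpPost]
    public ActionResult Login(string userName, string password)
    {
        // ... validate the credentials ...
        return RedirectToAction("Index", "Home");
    }
}
```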
As for the difference between GET and POST, it is as simple as this: GET is used to retrieve data from the server. You want to get data from a page and act upon it, and that is the end of it.
POST, on the other hand, is used to send data to the application. I am talking about transactions here (complete create, update, or delete operations).
If you have a sensitive application that takes, say, an ID to delete a user, you would not want to use GET for it, because a crafty user could cause mayhem simply by changing the ID at the end of the URL and deleting random users.
POST allows more data and can also be used to send streams of files. GET has a limited size, though.
Performance-wise, there is hardly any tradeoff in using GET or POST.
I have an ascx control which works just fine. It is contained in a larger aspx page. I want to put it in the fragment cache, so I added the appropriate OutputCache directive at the top. However, now the control variable in the underlying aspx.cs file is null the second time the page loads. I found a few places on the web where it said this would happen, but I also didn't find a solution for accessing the control.
What am I missing?
Also, can I control where it is cached? Can I make it cache in the browser cache rather than at the server?
Question #1: Output caching only stores the HTML result on the server. If you want to interact with or run any code in the user control at all, you cannot use full output caching. You may want to look into lower-level caching, perhaps database or object caching, or embed another user control within this one that uses full output caching itself, while the outer user control no longer does.
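As a rough sketch of the data-caching option in the control's code-behind (the cache key, timeout, and load method are placeholders):

```csharp
using System;
using System.Web;
using System.Web.Caching;
using System.Web.UI;

public partial class ProductList : UserControl
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Cache only the expensive data; the control itself stays live on every request.
        var products = HttpRuntime.Cache["ProductList"] as ProductData;
        if (products == null)
        {
            products = LoadProductsFromDatabase();        // assumed expensive call
            HttpRuntime.Cache.Insert(
                "ProductList", products, null,
                DateTime.Now.AddMinutes(5),               // absolute expiration
                Cache.NoSlidingExpiration);
        }
        // ... bind 'products' to the control as usual ...
    }

    private ProductData LoadProductsFromDatabase()
    {
        return new ProductData();                         // stand-in for the real query
    }
}

public class ProductData { /* ... */ }
```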
Question #2: "Can I control where it is cached?" If you use output caching, then no; that always means caching on the server. However, there are lots of different levels of caching. You can only cache a full HTTP response at the browser: a single HTML page, a CSS file, etc. If you want to cache only part of a page at the browser, but have the rest of the page dynamic, you would have to do it with some kind of JavaScript: either HTML5 local storage, or AJAX calls that carry appropriate caching headers or respond with 304 Not Modified.
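If you go the AJAX route, the server side could look something like this sketch (the action, the partial view, and the source of the last-modified timestamp are all assumptions):

```csharp
using System;
using System.Web;
using System.Web.Mvc;

public class WidgetController : Controller
{
    public ActionResult Fragment()
    {
        DateTime lastModified = GetLastModifiedUtc();     // whenever the data last changed

        // Answer 304 Not Modified if the browser's cached copy is still current.
        DateTime since;
        string header = Request.Headers["If-Modified-Since"];
        if (header != null && DateTime.TryParse(header, out since)
            && since.ToUniversalTime() >= lastModified)
        {
            return new HttpStatusCodeResult(304);
        }

        // Otherwise send the fragment along with caching headers.
        Response.Cache.SetCacheability(HttpCacheability.Private);
        Response.Cache.SetLastModified(lastModified);
        return PartialView("_Fragment");
    }

    private DateTime GetLastModifiedUtc()
    {
        return DateTime.UtcNow.Date;                      // stand-in for the real timestamp
    }
}
```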
Side note: The term "fragment cache" is more often referred to as "partial caching" in the ASP.NET world.
SO Tips: These are two questions, and should really be asked as two individual questions in the future.
Also, there are many ways to solve your problems here; if you provided more context to what you are doing and the performance problem you are trying to solve, we could offer more specific answers.
A site has hundreds of pages, following a certain sitemap. A user can navigate to page2.aspx from page1.aspx. But if the user goes to page2.aspx directly, say through a bookmarked URL, the user should be redirected to page1.aspx.
Edit: I don't want to go in and add code to every page that needs to fulfill this requirement.
Note: This is not a cross-page postback scenario.
You might consider something that is based off WorkFlow, such as this: http://blogs.msdn.com/mwinkle/archive/2007/06/07/introducing-the-pageflow-sample.aspx
The WCSF team also included a pageflow application block that you can use as a standalone add-on to your application.
I guess you could check the referrer, and if there isn't one, or it isn't page1.aspx, then redirect back to page1.aspx.
As another answerer mentioned, you could use the Referrer header, but that can be faked by the client.
Since you don't want to modify each page, you could do something with an IHttpModule. Assuming you have some way of describing the valid page navigations, you could do something like this early in the request pipeline (once session state is available):
Check the session for a list of valid pages (using a default list for first visit if none are in the session).
If this request is for an invalid page, redirect to the place the user should be.
Based on this request, set up the list of valid pages (and the redirect target) in the session so it's ready for the next request.
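A rough sketch of such a module; the hard-coded flow below is just a stand-in for however you describe the valid navigations, and note that session state isn't populated until after AcquireRequestState, so the module hooks a later event than BeginRequest:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

public class PageFlowModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // Session isn't available yet in BeginRequest, so hook a later event.
        context.PostAcquireRequestState += OnPostAcquireRequestState;
    }

    private void OnPostAcquireRequestState(object sender, EventArgs e)
    {
        var app = (HttpApplication)sender;
        var session = app.Context.Session;
        if (session == null) return;                          // e.g. static files

        string page = app.Request.AppRelativeCurrentExecutionFilePath;

        var allowed = session["AllowedPages"] as List<string>
                      ?? new List<string> { "~/page1.aspx" }; // default on first visit

        if (!allowed.Contains(page, StringComparer.OrdinalIgnoreCase))
        {
            app.Response.Redirect("~/page1.aspx");
            return;
        }

        // Based on the page being served, decide what may come next.
        session["AllowedPages"] = NextAllowedPages(page);
    }

    private static List<string> NextAllowedPages(string page)
    {
        // Stand-in for your real sitemap/flow definition.
        if (page.Equals("~/page1.aspx", StringComparison.OrdinalIgnoreCase))
            return new List<string> { "~/page1.aspx", "~/page2.aspx" };
        return new List<string> { "~/page1.aspx" };
    }

    public void Dispose() { }
}
```

You register the module once in web.config, so no individual page has to change.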
I recently worked with real code that checked whether the referrer was blank and used that as a step in authorization. The idea was that users wouldn't be able to fake a referrer, but you don't need a custom browser to fake one. Users can bookmark your page on Delicious, and then delicious.com is the referrer (and not blank).
I've had real arguments about how sophisticated a user needs to be to do certain hacks - i.e. if users don't know how to set the referrer, then you can trust it. While it's true that your users are unlikely to write a custom browser, there are already Firefox add-ons to set headers, referrers, etc., and they're easy to use.
Josh has the best answer - on page2 you should check the page hit log and see if the user has recently visited page1.
I like a lot of the answers above (specifically the workflow one).
Another option, is creating each page as a usercontrol and having page1.aspx control what usercontrol gets loaded. This has the advantage of storing your workflow in a single place instead of on each page.
However, I don't think there's a magic bullet out there. It sounds like this security problem is an afterthought, or possibly reported as a bug, and you have been tasked with fixing it quickly and efficiently.
I would start weighing the answers here by their associated cost in hours. I suspect the quickest solution will be to check referrer addresses on each page. Although hackable, it is obscure, and if that risk is acceptable to you it may be the appropriate solution.
I have a grid with several thousand rows that can be filtered and sorted. On each row you can click a details button, which brings you to a new page with detailed information about that row. Because this is a button, you can't middle-click or right-click and open it in a new tab. In addition, when clicking back you lose your filters and search results.
To solve this problem, I considered the following: switch the buttons to links, and when filtering and searching, use GET instead of POST requests. This way, you could switch to new pages with a right-click or middle-click, and if you did follow a link normally, back would work properly.
This change was not made, however. We were asked to add a 'next result / previous result' set of buttons on the details page that would allow you to navigate. While not an elegant solution, it would at least work.
I proposed adding querystring parameters to the details page that would regenerate the search query from the filter and allow you to get the next and previous results in code.
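Roughly what I had in mind (the controller, parameter names, and query-building helpers are made up):

```csharp
using System.Linq;
using System.Web.Mvc;

public class OrdersController : Controller
{
    // The details URL carries the current filter/sort plus the row's position in
    // the result set, so "previous"/"next" can be found by re-running the query.
    public ActionResult Details(int id, string filter, string sort, int index)
    {
        IQueryable<Order> query = BuildQuery(filter, sort);   // same query the grid uses

        Order next = query.Skip(index + 1).FirstOrDefault();
        Order previous = index > 0 ? query.Skip(index - 1).FirstOrDefault() : null;

        ViewBag.NextUrl = next == null ? null
            : Url.Action("Details", new { id = next.Id, filter, sort, index = index + 1 });
        ViewBag.PreviousUrl = previous == null ? null
            : Url.Action("Details", new { id = previous.Id, filter, sort, index = index - 1 });

        return View(GetById(id));
    }

    private IQueryable<Order> BuildQuery(string filter, string sort)
    {
        return Enumerable.Empty<Order>().AsQueryable();       // stand-in for the real query
    }

    private Order GetById(int id)
    {
        return new Order { Id = id };                          // stand-in for the real lookup
    }
}

public class Order { public int Id { get; set; } }
```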
A team member took issue with this solution. He believes it is a waste of server resources to re-query the database. Instead, a solution was proposed to add a session variable containing the list of results, which you could then use to navigate.
I took issue with that because you can't have multiple tabs open without breaking navigation, and new results aren't appended to the list in real time. Also, if you're worried about optimization, session would be the last thing to use, since it eats memory and prevents server replication... unless you store the results back in the database.
What's the best solution?
Session doesn't sound like a winner, won't scale with lots of users.
Hitting the database repeatedly does seem unnecessary, but it depends on the cost - how many users, how often would they refresh/filter and what is the cost of that query?
If you do use querystrings, you could cache the pages by parameter.
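For example, with output caching varied by the query-string values (the duration and parameter names are illustrative):

```csharp
using System.Web.Mvc;

public class GridController : Controller
{
    // Repeated hits with the same filter/sort/page are served from the output
    // cache for 60 seconds instead of re-querying the database.
    [OutputCache(Duration = 60, VaryByParam = "filter;sort;page")]
    public ActionResult Index(string filter, string sort, int page = 1)
    {
        // ... run the filtered, sorted, paged query and return the grid ...
        return View();
    }
}
```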
What about some AJAX code on that button to retrieve the details - leave the underlying grid in place and display the details in a div/panel or a new window/tab?