Is it still safe to just use the AjaxControlToolkit's NoBot control instead of a CAPTCHA control?
The NoBot control checks for:
Posting back quickly
Posting back many times
JavaScript being disabled in the browser.
What this means is that when a crawler finds the page and posts back immediately, or does not support JavaScript, it is not permitted to post back. The flip side is that if the user's browser does not support JavaScript, or JavaScript throws an error for any reason, the form does not work at all.
It is a nice alternative to a CAPTCHA for low-traffic sites, meaning sites that do not get much spam. It is not 100% effective, but it works most of the time. A determined spammer who targets you specifically can bypass it, but if you do not suffer many attacks and just want to filter out low-tech spammers, it can work.
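To make that concrete, here is a rough sketch of checking the control on postback in the code-behind. The control IDs (NoBot1, ErrorLabel), the button handler and SaveMessage() are placeholders of my own, and IsValid(out NoBotState) is the toolkit API as I remember it, so verify it against your AjaxControlToolkit version:

using System;
using AjaxControlToolkit;   // NoBot, NoBotState

public partial class ContactPage : System.Web.UI.Page
{
    protected void SubmitButton_Click(object sender, EventArgs e)
    {
        NoBotState state;

        // IsValid fails when the form was posted back too quickly, posted back
        // too many times within the cutoff window, or the JavaScript challenge
        // was never answered (script disabled or broken).
        if (!NoBot1.IsValid(out state))
        {
            ErrorLabel.Text = "Submission rejected: " + state;
            return;
        }

        // Looks like a normal, browser-driven postback - process the form.
        SaveMessage();
    }

    private void SaveMessage() { /* placeholder for the real form handling */ }
}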
I have been searching around for a way to simply request webpages with HTML5. Specifically, I want to do an HTTP(S) request to a different site. It sounds like this is not possible at first due to obvious security reasons, but I have been told this is possible (maybe with WebSockets?).
I don't want to use iframes, unless there is a way to make it so the external site being requested does not know that it is being requested through an iframe.
I've been having a really difficult time finding this information. All I want to do is load a webpage, and display it.
Does anyone have some code they could share, or some suggestions?
Thanks.
Sounds like you are trying to circumvent the Same-Origin Policy. Don't do that :)
If it is just the request you want (and nothing else), there are a number of ways to do it, namely with new Image in JavaScript or an iframe. The site will know it is in an iframe by checking top.location.href != window.location.href.
Funneling all your requests through a proxy might be a solution - the client addresses only the proxy, and the actual URL to retrieve would be a query string parameter.
best regards, carsten
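Something like this minimal ASP.NET handler would do as a starting point - the handler name (proxy.ashx) and the "url" parameter are just my own choices, and a real version should whitelist the hosts it is willing to fetch:

using System.Net;
using System.Web;

// Registered as e.g. /proxy.ashx; the page then requests /proxy.ashx?url=...
public class ProxyHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string target = context.Request.QueryString["url"];

        // NOTE: validate/whitelist "target" here - an open proxy will be abused.
        using (var client = new WebClient())
        {
            string html = client.DownloadString(target);
            context.Response.ContentType = "text/html";
            context.Response.Write(html);
        }
    }

    public bool IsReusable { get { return false; } }
}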
Many browsers in Japan (EZWeb, i-mode, etc) don't allow meta refresh, and in fact, they may display warning messages such as "This page uses newer technology and cannot be displayed" in place of your webpage.
How can I tell if a mobile browser does not support meta-refreshing so that I can take different action in those cases?
Thanks
The best option for something like this is to display a link on the page with the meta-refresh. The traditional "click here if the page doesn't redirect you in 5 seconds" kind of thing. That's what has been done for years in the PC realm.
You should also consider a proper HTTP redirect (301 or 302) with the Location: header if you are just redirecting.
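For the redirect case, a minimal ASP.NET sketch (the target URL is a placeholder):

using System;
using System.Web.UI;

public partial class OldPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Temporary (302) redirect - the usual one-liner:
        // Response.Redirect("/new-page", true);

        // Permanent (301) redirect, set by hand:
        Response.StatusCode = 301;
        Response.AddHeader("Location", "/new-page");
        Response.End();
    }
}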
If instead you want a page to reload after a specific amount of time, then you are stuck. Without JavaScript, there is no other method you can use to automatically do this.
Without JavaScript you're really limited to user-agent sniffing. To provide the best experience, I would recommend using known UA strings to send the meta refresh only to browsers you know can handle it, and for those you don't recognize, send a plain HTML response with a link for users to click to do the refresh.
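As a rough sketch of that idea in ASP.NET (the UA substrings are only examples, not a maintained list, and the control and URL names are placeholders):

using System;
using System.Web.UI;
using System.Web.UI.HtmlControls;

public partial class WaitPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string ua = Request.UserAgent ?? string.Empty;

        // Carriers known to choke on meta refresh (EZweb handsets use KDDI /
        // UP.Browser UA strings, i-mode uses DoCoMo) - adjust to your own logs.
        bool refreshUnsupported =
            ua.Contains("KDDI") || ua.Contains("UP.Browser") || ua.Contains("DoCoMo");

        if (!refreshUnsupported)
        {
            // Only emit the meta refresh for browsers we trust to handle it
            // (requires <head runat="server"> in the page).
            var meta = new HtmlMeta();
            meta.HttpEquiv = "refresh";
            meta.Content = "5;url=/next-page";
            Page.Header.Controls.Add(meta);
        }

        // Either way the markup itself should contain a plain "click here" link.
    }
}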
I'm building an ASP.NET website just to test my skills, and I'm using lots of callbacks that don't require a page refresh, so the URL doesn't change. In this example, assume I'm building a web-based Outlook with a treeview, a grid, and a detail pane.
Is there a standard (published or assumed) that says I should postback, or even update my URL from time to time?
The standard you are probably looking for is called usability. DHTML, Ajax, or whatever you want to call it is fine until it breaks the user's expectations of browser behavior. When the back button fails to work, and users can't bookmark the page exactly as they expect, you're doing it wrong.
I don't know about an official standard, but you may want to check out Gmail to see a good example of how something similar was done. The URL changes on the site much more often than the page refreshes.
What is the most standard or best way to persist data between requests?
Should I use cookies or session variables? I'm interested in keeping data like sort order, sort column, and page number (for pagination).
I'm coming from a webforms background so normally this type of thing was automatically handled for me in the viewstate of the controls I was using.
update
I like the query string idea, for searching and more meaningful URLs; however, I'm working on an "index/list" view, which consists of a view with a header, "control" options like DDLs for filtering, and a partial view that renders the table of data.
The DDLs use $.load() to call an ActionResult on the controller, which returns the partial view; the parameters are passed in the query string, but since these are AJAX requests, the main page URL in the user's browser does not get updated.
Is there a best practice for taking query strings off the main-page URL and using them in AJAX requests to other ActionResults?
If you want it to survive only through one request/redirect, TempData is your friend.
However, for things like your pagination, the URL is the best method, if only for the ability to share links.
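For example (ASP.NET MVC; the controller, action and key names are made up for the illustration):

using System.Web.Mvc;

public class ItemsController : Controller
{
    public ActionResult Save(/* Item item */)
    {
        // ... persist the item ...

        // TempData survives exactly one subsequent request - ideal for a
        // "flash" message across a redirect.
        TempData["StatusMessage"] = "Saved.";
        return RedirectToAction("Index");
    }

    public ActionResult Index()
    {
        // Present on the request right after the redirect, gone after that.
        ViewData["Status"] = TempData["StatusMessage"];
        return View();
    }
}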
A standard way is to pass those sort of things via URL Query Parameters. You can modify your routing to expect certain URL variables. That way the pages become more search engine friendly as well.
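A sketch of what that can look like in ASP.NET MVC - the route pattern, controller and parameter names are just for illustration:

using System.Web.Mvc;
using System.Web.Routing;

public static class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // The page number lives in the path; sort options ride along as
        // ordinary query string values, e.g. /products/page2?sortBy=Name&sortDir=asc
        routes.MapRoute(
            "ProductList",
            "products/page{page}",
            new { controller = "Products", action = "Index", page = 1 });
    }
}

public class ProductsController : Controller
{
    public ActionResult Index(int page, string sortBy, string sortDir)
    {
        // Model binding pulls "page" from the route and the sort values from
        // the query string, so every combination has a shareable URL.
        // var model = repository.GetPage(page, sortBy, sortDir);  // hypothetical data access
        return View();
    }
}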
It depends on how permanent you want the information to be:
Things like the page number should indeed be in the URL (as others have pointed out) - this helps with bookmarking, etc, but remember that if you add more content to the list, then that bookmarked result set will not always be what the user wanted...
If you're happy for these values to be lost when a session times out (by default around 20 minutes), then put them in Session.
If you expect the session to time out before the next request, or you want the values saved across visits, then you should store them in either cookies or a profile (potentially allowing "anonymous" profiles, which are tied to the user's cookies, so they would be lost across machines).
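A quick sketch of all three options in ASP.NET terms (the key names and the profile property are made up, and the profile line also needs a matching property defined in web.config):

using System;
using System.Web;
using System.Web.UI;

public partial class ListPage : Page
{
    protected void SavePreferences(string sortColumn, string sortOrder)
    {
        // 1. Per-visit: gone when the session times out (~20 minutes by default).
        Session["SortColumn"] = sortColumn;
        Session["SortOrder"] = sortOrder;

        // 2. Across visits on the same machine: a persistent cookie.
        var prefs = new HttpCookie("listPrefs");
        prefs.Values["sortColumn"] = sortColumn;
        prefs.Values["sortOrder"] = sortOrder;
        prefs.Expires = DateTime.Now.AddDays(30);
        Response.Cookies.Add(prefs);

        // 3. Across visits and machines (for known users): the Profile provider.
        // Profile.ListSortColumn = sortColumn;   // requires a profile property in web.config
    }
}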
Personally, I'd think very carefully about putting sort order and columns in the URL; if you do, you could actually end up confusing search engines:
Lots of pages with very similar content (page 1 sorted by date desc, page 1 sorted by date asc, etc.) - search engines don't like duplicate content, and nor should you: Google (for instance) will only show two pages from your site in a default result set, and you want those to be distinct, valid pages, not duplicates.
Search engines will spend lots more time crawling your site, and potentially give up - if on every page they find links to "Sort by this column", they will attempt to follow them, resulting in more work on the server, higher bandwidth use, etc.
These can be mitigated with a robots.txt file denying access to sorted versions of the page, but if that file is generated almost dynamically, it will be very complex to maintain going forward.
In response to your update, a nice way to achieve that for pages would be to output links to the "Previous" and "Next" pages of results (or better yet, a list of all the pages), with their page numbers, directly on the page, and then hide them with JavaScript.
This way users get your nice, AJAXy behaviour, while search engines (and users without JavaScript - on mobile, or using older screen readers, for instance) will still be able to reach all your pages - this helps your pages degrade gracefully, or use "Progressive Enhancement".
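One way to wire that up in ASP.NET MVC is to let a single action answer both kinds of request - the controller, view name and repository call below are placeholders:

using System.Web.Mvc;

public class ListController : Controller
{
    public ActionResult Index(int page, string sortBy)
    {
        // var model = repository.GetPage(page, sortBy);   // hypothetical data access
        object model = null;

        if (Request.IsAjaxRequest())
        {
            // jQuery's $.load() call: return just the table markup.
            return PartialView("_ListPartial", model);
        }

        // Plain request (search engines, no-JS users, bookmarks): full page,
        // pager links and all.
        return View(model);
    }
}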
Things that were previously in viewstate should probably be put back in the client's hands via either hidden fields or cookies.
Session is "too" easy. In a dev environment it works great, pretty much no matter what you put in it. In production, scalability and persistence become a problem. In-process session state is likely to disappear unexpectedly if you have a crashing bug in your site, and it requires server affinity when load balancing. Out-of-process session state fixes the durability and affinity issues, but it can still be a performance bottleneck if too much is put into session. A VERY common problem is that each page puts one or two items into session but never takes them out again when it is done. And even if a page removes its session data when it is no longer needed, the data can still get orphaned if a user starts a process and never completes it.
Cookies are a fast and simple way to persist data between requests, and you can also make them live only for a limited time, depending on your needs.
Session is easiest.
How can I log in without leaving the RP, by showing the OP login window in an iframe?
I am using an OpenID provider for the login on my website.
How do I implement the login window inside an iframe?
Using an iframe is hugely frowned upon, since the user will be entering their credentials on a page that looks like it is your RP but is supposedly their OP instead. It teaches users to be phished.
If you're going to use an iframe anyway, very little special work has to be done. There are a few approaches you can take though. If you're taking the OpenID identifier from the user on the page and will display an iframe based on the user input, then the easiest way is probably to use JavaScript when the user clicks "Login" to create an iframe and direct it at http://yoursite.com/redirect.aspx?openid=userSuppliedIdentifier. That page will perform OpenID discovery on the identifier and do the standard redirect to the OP, which will be limited to the iframe since that is where the request came from. The openid.return_to that you send to the OP will have to be a special page that knows how to "pop out" of the iframe back into your main window. It's really a very similar flow to the popup window approach that I point you to a demo of below, but instead of a popup, you do it in an iframe.
Rather than an iframe, the recommended way if you don't want to send the users away momentarily from your site, is to use a popup window. Just one such example of this is DotNetOpenAuth's ajax login sample, but there are other ways to do it. It's always complicated to get it working across browsers and working securely. We'd need to know what web platform you're using (ASP.NET, PHP, Perl, Python, etc.) before going much further.
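Since DotNetOpenAuth came up, here is a very rough sketch of what that redirect.aspx might look like with it - treat the method names as approximate and check them against the library version you use; the session key and pop-out target are placeholders:

using System;
using System.Web.UI;
using DotNetOpenAuth.OpenId;
using DotNetOpenAuth.OpenId.RelyingParty;

public partial class Redirect : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        var rp = new OpenIdRelyingParty();
        var response = rp.GetResponse();

        if (response == null)
        {
            // First hit (?openid=userSuppliedIdentifier): run discovery and
            // send the iframe off to the OP.
            Identifier id = Identifier.Parse(Request.QueryString["openid"]);
            rp.CreateRequest(id).RedirectToProvider();
        }
        else if (response.Status == AuthenticationStatus.Authenticated)
        {
            // The OP has sent the user back into the iframe; record the login
            // and "pop out" to the main window.
            Session["ClaimedIdentifier"] = response.ClaimedIdentifier.ToString();
            Response.Write("<script>top.location = '/';</script>");
        }
    }
}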
(In response to Andrew Arnott's response) I'm bothered that popups are considered the norm for redirects. It's true that Facebook has adopted this approach, but I don't think it's the final solution. From a UI/UX point of view, in other applications we've tried to move away from popup windows in favor of inline types of user experience (popup ads, for instance, are extremely annoying); popups in general are just aggravating. Hence JavaScript third-party widgets such as thickbox/lightbox/shadowbox. These solutions allow for iframe-loaded content.
Plaxo and Google provided an experiment showing something like a 92% return rate for users who signed in with a two click OpenID process, so the question isn't about return rate, and yes popups can work in that scenario, however...
What I think hasn't been solved is adoption rate, and this comes down to basic usability and user experience, and what most engineers seem to be missing is the fact that users are completely driven off by popups.
It's true that phishing is a problem, but I think the onus and burden for better security lies with the developer on this one, and not the user. For this reason, I still think an inline experience is best, and, unfortunately, iframes are the only methodology currently employable. There are solutions, however, to prevent phishing.
I see that you are discussing usage of iframes for OP authentication. Have you considered the fact that clickjacking becomes possible when using iframes? In fact, many OPs do not allow their pages to be included in an iframe, e.g. VeriSign, Yahoo, myOpenID, etc. They break out of iframes using the HTTP header X-FRAME-OPTIONS, or JavaScript like this:
if (top.location != location) {
top.location = self.location;
}
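On the ASP.NET side, sending that header for every response can be as simple as something like this (here hung off Global.asax; whether you use "DENY" or "SAMEORIGIN" depends on whether you frame your own pages):

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        Response.AddHeader("X-FRAME-OPTIONS", "DENY");   // or "SAMEORIGIN"
    }
}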
Take a look at http://ajaxian.com/archives/busting-framebusters-clickjacking-is-still-a-big-issue for more information.