I'm trying to 'AJAX-ify' my site in order to improve the UI experience. In terms of performance, I'm also trying to get rid of the UpdatePanel. I've come across a great article over at Encosia showing a way of posting using PageMethods. My question is, how secure are page methods in a production environment? Being public, can anyone create a JSON script to POST directly to the server, or are there cross-domain checks taking place? My PageMethods would also write the data into the database (after filtering).
I'm using Forms Authentication in my pages and, on page load, it redirects unauthenticated users to the login page. Would the page methods on this page also need to check authentication if the user POSTs directly to the method, or is that authentication inherited for the entire page? (Essentially, does the entire page life cycle occur even if a user has managed to post only to the page method?)
Thanks
PageMethods are as secure as the handler in which they reside.
FormsAuthentication will protect everything except the Login page.
On an unprotected handler, like login, you should expose only methods that 1) are not sensitive or 2) validate the user.
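For example, a login page method that falls into the second category might look like this, as a minimal sketch assuming the standard Membership provider (the method and parameter names are illustrative):

[System.Web.Services.WebMethod]
public static bool LogIn(string userName, string password)
{
    // Validate the credentials before doing anything else (assumes the built-in
    // Membership provider; substitute your own user store if you have one).
    if (!System.Web.Security.Membership.ValidateUser(userName, password))
        return false;

    // Issue the forms-authentication cookie so subsequent requests are authenticated.
    System.Web.Security.FormsAuthentication.SetAuthCookie(userName, false);
    return true;
}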
EDIT: In response to comments and other answers regarding CSRF and XSS, please see http://weblogs.asp.net/scottgu/archive/2007/04/04/json-hijacking-and-how-asp-net-ajax-1-0-mitigates-these-attacks.aspx
You're trying to protect against CSRF attacks.
These attacks can be prevented by requiring an authorization code in the POST parameters, and supplying the auth code in the initial page load. (The auth code should be per-IP address and per-user, and should expire quickly)
For added security, you can make each auth-code only usable once, and have each request return a new auth-code. (However, if any request fails, you'll need to reload the page)
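A rough sketch of that scheme for a page method, assuming session state is enabled (the hidden field, session key, and method names are illustrative, and the per-IP binding and quick expiry mentioned above are omitted for brevity):

// On the initial page load, issue an auth code and embed it in the page for the client script to post back.
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        string token = Convert.ToBase64String(Guid.NewGuid().ToByteArray());
        Session["AuthCode"] = token;     // server-side copy
        AuthCodeField.Value = token;     // hidden field read by the client script
    }
}

[System.Web.Services.WebMethod]
public static string SaveData(string data, string authCode)
{
    // Reject the call unless the code posted by the client matches the one
    // issued to this user's session when the page was rendered.
    var expected = HttpContext.Current.Session["AuthCode"] as string;
    if (string.IsNullOrEmpty(expected) || expected != authCode)
        throw new InvalidOperationException("Invalid or missing authorization code.");

    // ...filter and write the data to the database here...
    return "ok";
}

For the single-use variant, replace the stored code with a fresh one inside the method and return it to the client along with the result.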
I am working on a project that heavily uses ASP.NET WebForms page methods, which I call via Ajax. This is far more convenient for me than writing all my code in JavaScript.
However, securing the page methods became an issue that troubled me. I can access the page methods via Postman and Fiddler, which means anyone can poke at these endpoints directly.
My solution, which I discovered accidentally, is quite simple: reading a cookie inside the page method causes an error for any client that is NOT the website itself, because the cookie will be missing.
[WebMethod]
[ScriptMethod(UseHttpGet = false, ResponseFormat = ResponseFormat.Json)]
public static string GetAnything(object dat)
{
    // If the request does not carry the expected cookie (as with Postman or
    // Fiddler), myguid is null and the next line throws, so the caller only
    // ever sees a generic error response.
    HttpCookie myguid = HttpContext.Current.Request.Cookies.Get(Constants.Session.PreventHacking);
    var hackguid = myguid.Value ?? "";

    // ...other page method contents...
    return "anything";
}
A Postman request to this method returns:
{
    "Message": "There was an error processing the request.",
    "StackTrace": "",
    "ExceptionType": ""
}
A more detailed error is shown when running on localhost.
I understand there are browser add-ons that can intercept API calls from within the page itself. I have not tested this; a separate security fix would have to be built for that case.
I'll update here once I perform some tests.
Think of page methods as a mini web service local to the page. They have no extra checks or verification in place beyond those applied to the entire website, plus whatever you choose to add yourself.
Using page methods is a smart idea from the point of view of encapsulation, and if you're going to use them it doesn't hurt to put some extra security measures in place.
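For instance, one cheap extra measure, sketched here with an illustrative role name, is to re-check the caller's identity and roles inside the method itself instead of relying solely on the page-level redirect:

[System.Web.Services.WebMethod]
public static string DeleteRecord(int id)
{
    var user = HttpContext.Current.User;

    // Forms authentication already guards the URL, but re-checking here gives
    // defence in depth for callers that POST straight to the method.
    if (user == null || !user.Identity.IsAuthenticated)
        throw new System.Security.SecurityException("Not authenticated.");

    if (!user.IsInRole("Admin"))
        throw new System.Security.SecurityException("Not authorized.");

    // ...perform the delete here...
    return "deleted";
}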
I have a web application where I use HTTP handlers and jQuery for AJAX calls.
The problem is that a user can type the same URL (the one generated by the jQuery code) directly into the browser, and the operation is still performed.
Can I send some token with the query string and then, on the server side, check for the right token before performing any operation?
I hope I have described my problem correctly.
You may need to handle this in a similar fashion to how it can be handled in the MVC framework. Here is a similar post that describes a potential solution.
The attack described above is called Cross-Site Request Forgery (CSRF).
Risk Impact
An attacker can hijack a logged-in user's session to perform malicious transactions.
Recommendations
It is recommended to implement a page token (a random token sent as an additional parameter in the request) for all transactional pages. This token should be randomly generated and unique for each user.
Suggested references:
http://www.owasp.org/index.php/CSRF_Guard
http://www.cgisecurity.com/csrf-faq.html
var cg = new CSRFGuard();
cg.SetupCSRFTokenNameAndValue();
SessionManager.CustomerConfig.CsrfTokenName = cg.CsrfTokenName;
SessionManager.CustomerConfig.CsrfTokenValue = cg.CsrfTokenValue;
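CSRFGuard and SessionManager above are the poster's own helpers, so purely as a generic illustration, the server-side check inside an HTTP handler could look roughly like this (the token name, session key, and handler name are assumptions):

public class SecureActionHandler : System.Web.IHttpHandler,
                                   System.Web.SessionState.IRequiresSessionState
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(System.Web.HttpContext context)
    {
        // Token issued when the page was rendered and stored in session.
        string expected = context.Session["CsrfToken"] as string;
        // Token the jQuery call appended to the query string.
        string supplied = context.Request.QueryString["token"];

        if (string.IsNullOrEmpty(expected) || expected != supplied)
        {
            context.Response.StatusCode = 403;   // refuse to perform the operation
            return;
        }

        // ...perform the requested operation...
        context.Response.ContentType = "text/plain";
        context.Response.Write("ok");
    }
}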
Thanks a lot.
I'm using HTML + jQuery for the UI and Spring Roo to generate the service layer, which handles the JSON/object conversion. It works well for us, as in the following sample code:
@RequestMapping(headers = "Accept=application/json")
@ResponseBody
public ResponseEntity<String> ArticleController.listJson() {
    HttpHeaders headers = new HttpHeaders();
    headers.add("Content-Type", "application/json; charset=utf-8");
    List<Article> result = Article.findAllArticles();
    return new ResponseEntity<String>(Article.toJsonArray(result), headers, HttpStatus.OK);
}
But after developing several sample pages, I have some questions:
1) We want to use Spring Security as the access-control module. Is that OK with this framework? How can the server know that a request belongs to the same browser session?
2) Is pure HTML + jQuery really OK instead of a server-side technology such as JSP? I see a lot of Ajax code embedded in the HTML, and much of it cannot be reused. Server-side technologies have templates that maximize code reuse, so I'm worried about development difficulty and maintenance effort.
PS: We chose HTML + jQuery + JSON because we get HTML + CSS directly from the designer, and we plan to support clients other than the browser, so JSON seemed a good choice.
Thanks.
1) We want to use Spring-Security as Access Control module, [...] How can server knows it is the same session request from the browser?
First, the session must be established on the server side. Use the standard Spring Security login screen or call spring_security_login via Ajax. In return, the server sends a cookie containing the JSESSIONID. This cookie is sent with every subsequent request (including AJAX requests), so the server knows which user is calling the REST methods. This is completely transparent.
Also, when you log out (by calling j_spring_security_logout), the session and the cookies are destroyed.
We are using this approach successfully (moreover, due to historical reasons, we are calling SOAP services from JavaScript!) and it works really well.
2) [...]pure HTML + JQuery is really OK? Because I see many Ajax code injected in the html, and many of them cannot be reused. [...]
True separation of concerns is king. Keep JavaScript in one place (.js files) and HTML in another (.html files); they should never be mixed. Also keep your JavaScript code layered and stay away from direct DOM manipulation as much as possible (e.g. use client-side templating engines).
Moreover, nothing prevents you from generating HTML at build time so that common snippets like headers and footers are included in every page.
Quick summary
Pinging back to a web service via Ajax from the client keeps the user's session alive; I don't want this to happen.
More extensive summary
For a website we're developing, we need the client to ping back (in JS) to a web service (ASMX) on the server (IIS 7.5). This lets the server know that the user is still on the site and hasn't browsed away to another site.
This is as our customer wants to let users lock records, but if they browse away to other sites, then let other people take over those locked records. Perhaps the distinction between the client being on the site but inactive and on another site seems unimportant, but that's kinda irrelevant, I don't get to write the UI spec, I just have to make it work.
My problem is this: the ping stops the user from being timed out on the server through the standard forms authentication timeout mechanism. That's not surprising, as there is a 30-second ping in the background keeping the session alive. Even though we want to know whether the user is still on the site, we want the normal forms authentication timeout mechanism to be honoured.
I thought I might be able to fix this by removing the ASP.NET_SessionId and .ASPXAUTH cookies in the XMLHttpRequest that is the server ping, but I can't figure out how to do this.
This is how my web service & method are defined:
[WebService(Namespace = "http://tempuri.org/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[ScriptService]
public class PingWS : WebService
{
    [WebMethod]
    public void SessionActive(string sessionID)
    {
        // does stuff here
    }
}
This is how I'm calling it in JS (the request is over HTTPS):
$.ajax({
    type: "POST",
    url: "PingWS.asmx/SessionActive",
    contentType: "application/json; charset=utf-8",
    data: '{"sessionID":"' + aspSessionID + '"}',
    beforeSend: function (xhr) {
        // Attempt to blank out / replace the cookies; this only appends to the
        // Cookie header, so the browser's real cookies still get sent.
        xhr.setRequestHeader('Cookie', '');
        xhr.setRequestHeader('Cookie', 'ASP.NET_SessionId=aaa; .ASPXAUTH=bbb;');
    },
    dataType: "json"
});
I was trying setRequestHeader, but that just appends to the header rather than overwriting it, and IIS happily ignores the junk I added.
I'm thinking maybe I should be trying to do this at the server end, somehow taking PingWS.asmx out of the loop so that it doesn't keep the session active, but I'm not sure how to do this.
Although the title of the question is focused on clearing the cookie in the header, I'd be super happy if anyone points out that I'm being really stupid and there is actually a much better way of trying to do what I'm doing.
I'm thinking at this stage maybe I need to add something to the webmethod that says how long this particular page has been inactive, and use that knowledge to timeout manually. That actually sounds pretty easy, so I think I'll do that for now. I'm still convinced that there must be an easy way to do what I originally wanted to do though.
Update
I'm thinking I'm pretty screwed in terms of cookie manipulation here, as both the .ASPXAUTH and ASP.NET_SessionId cookies are HttpOnly, which means the browser takes them out of your hands: you can't access them via the document.cookie object. So I would say that leaves me with:
Updating my SessionActive web method to track each request so I can tell how long the user has been sitting idle on a page and time out if need be (a sketch follows below)
Marking the .asmx page somehow on the server end so that it's taken out of the normal authentication/session tracking flow
I know how to do option 1, so I'll start there, but option 2 seems much cleaner to me.
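For option 1, a minimal sketch (the 20-minute idle limit and the session key are my assumptions; the base page's OnLoad would set Session["LastRealActivity"] on every genuine page request):

[WebMethod(EnableSession = true)]
public bool SessionActive(string sessionID)
{
    // Written by the shared base page class on every real page load.
    var lastActivity = Session["LastRealActivity"] as DateTime?;

    bool stillValid = lastActivity.HasValue &&
                      DateTime.UtcNow - lastActivity.Value < TimeSpan.FromMinutes(20);

    // Returning false tells the client script to stop pinging and send the
    // user to the login page; the idle clock, not the ping, decides the timeout.
    return stillValid;
}

The client-side success callback would then check the returned flag and redirect to the login page when it comes back false.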
You could restrict the authentication cookie to a given Path:
<authentication mode="Forms">
    <forms path="/admin" />
</authentication>
This way the authentication cookie will only be sent to the /admin portion of your site and if you put your web service at the root you can ping it with AJAX without sending the cookie.
Another option is simply to host this web service in a separate virtual directory.
As you're finding, you're out of luck with client cookie manipulation (or client manipulation of any kind). Manipulating headers might be possible, but you'd have to intercept the traffic very early in the pipeline; I don't even know if it would be possible in the service itself, though a traffic manager like Zeus could do it. I don't believe it's possible to configure the session engine to ignore a given combination of endpoint and client request, and although it should be possible to replace the entire session engine, that would be undocumented and extremely time-consuming, I'd think.
Basically, you need to manipulate the traffic before it touches the service or you're not going to resolve this. Session was not designed to be variant.
There is a server-side answer: disable sliding expiration on forms authentication, then manually slide the timeout yourself, skipping the slide for the ping.
Pretty easy for me as every page uses the same root Site.Master and all pages inherit from the same base class.
Summed up here:
http://picometric.blogspot.com/2009/04/manual-sliding-expiration.html
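A sketch of what that post describes (assuming slidingExpiration="false" in web.config and a 30-minute timeout; call this from the base page class or Site.Master, and simply never call it from PingWS):

// Requires using System.Web.Security;
protected void RenewAuthTicket(HttpContext context)
{
    HttpCookie authCookie = context.Request.Cookies[FormsAuthentication.FormsCookieName];
    if (authCookie == null)
        return;

    FormsAuthenticationTicket oldTicket = FormsAuthentication.Decrypt(authCookie.Value);
    if (oldTicket == null || oldTicket.Expired)
        return;

    // Re-issue the ticket with a fresh expiration window; because sliding
    // expiration is off, only code paths that call this method extend the login.
    var newTicket = new FormsAuthenticationTicket(
        oldTicket.Version, oldTicket.Name, oldTicket.IssueDate,
        DateTime.Now.AddMinutes(30), oldTicket.IsPersistent,
        oldTicket.UserData, oldTicket.CookiePath);

    authCookie.Value = FormsAuthentication.Encrypt(newTicket);
    authCookie.HttpOnly = true;
    context.Response.Cookies.Set(authCookie);
}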
I have a login form on the home page of an ASP.NET 3.5 website which, for performance reasons, needs to be accessed over a standard HTTP connection. Since the normal postback for an ASP.NET page is a relative call, the values the browser posts would be sent unprotected.
I would like to do one of two things to make this secure:
Force the Postback to be secure to the same page
Send the post to a different page using an HTTPS connection
Is there a way to implement option one?
I'm also looking at the Authentication Service, but looking at the URL reference it is using a relative path:
Sys.Services._AuthenticationService.DefaultWebServicePath = '../Authentication_JSON_AppService.axd';
I don't see a way to override this to put in an absolute HTTPS path.
You could use Cross-Page posting:
http://msdn.microsoft.com/en-us/library/ms178139.aspx
You can change the form's action attribute with JavaScript to tell it to submit to a different page over HTTPS. I have done this and it works nicely.
You could also change it to submit to the same page over HTTPS, but I think ASP.NET would complain about that (not sure - never tried it).
Sample script:
document.forms[0].action = "https://www.whatever.com/submit_page.aspx";
I've been using user controls extensively but have never used an HttpHandler, and I was wondering if I am doing something suboptimal or wrong.
Unfortunately, your question is a little like asking "Should I use a sandwich or a cement mixer?" HttpHandlers and user controls are completely different things.
HttpHandlers are used to process HTTP requests. For example, if you wanted to dynamically create an RSS feed, you could write an HTTP handler that handles all requests for ".rss" files, creates the output and sends it back to the user.
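A minimal sketch of such a handler (the feed content is hard-coded here just to show the shape; a real one would pull items from your data store):

public class RssHandler : System.Web.IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(System.Web.HttpContext context)
    {
        context.Response.ContentType = "application/rss+xml";
        context.Response.Write(
            "<?xml version=\"1.0\" encoding=\"utf-8\"?>" +
            "<rss version=\"2.0\"><channel><title>Example feed</title>" +
            "<item><title>First post</title><link>http://example.com/1</link></item>" +
            "</channel></rss>");
    }
}

It would then be registered for *.rss under <httpHandlers> (or under <system.webServer>/<handlers> in IIS 7 integrated mode).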
User controls are used within ASPX pages to encapsulate units of functionality that you want to re-use across many pages.
Chances are, if you're using user controls successfully, you don't want to use HttpHandlers!
Basically, a user control is a piece of server logic plus UI. An HTTP handler is only a piece of logic that is executed when a resource on your server is requested. For example, you might decide to handle requests for images through your own handler and serve the images from a database instead of the file system. In that case there's no interface the user sees; when they visit a URL on your server, they simply get the response you constructed in your handler. Handlers are usually registered for specific extensions and HTTP request types (POST, GET). Here's some more info on MSDN: http://msdn.microsoft.com/en-us/library/ms227675(VS.80).aspx
Expect a better answer (probably before I finish typing this), but as a quick summary:
A user control is something that can be added to a page.
An HttpHandler can be used instead of a page.
Just to clarify the question: I was reading the Hanselman post
http://www.hanselman.com/blog/CompositingTwoImagesIntoOneFromTheASPNETServerSide.aspx
and thinking that I would never have solved that problem with an HttpHandler; I would probably have used a simple page returning binary content.
This led me to think that I should add HttpHandlers to my developer tool belt.
Even an ASP.NET page is an HttpHandler:
public class Page : TemplateControl, IHttpHandler
A user control, on the other hand, actually resides within an ASP.NET ASPX page.