Call an ASMX webservice in javascript without resetting ASP.NET authentication timers - asp.net

Quick summary
Pinging back to a webservice in AJAX from the client keeps the user's session alive; I don't want this to happen.
More extensive summary
For a website we're developing, we need the client to ping back (in js) to a webservice (ASMX) on the server (IIS7.5). This happens to let the server know that the user is still on the site, and hasn't browsed away to another site.
This is because our customer wants to let users lock records, but to let other people take over those locked records if the original user browses away to another site. Perhaps the distinction between the client being on the site but inactive and being on another site seems unimportant, but that's kind of irrelevant; I don't get to write the UI spec, I just have to make it work.
My problem is this: the ping stops the user from being timed out on the server through the standard forms authentication timeout mechanism. Not surprising, as there is that 30-second ping in the background keeping the session alive. Even though we want to know whether the user is still on the site, we want the normal forms authentication timeout mechanism to be honoured.
I thought I might be able to fix this by removing the ASP.NET_SessionId and .ASPXAUTH cookies in the XMLHttpRequest that is the server ping, but I can't figure out how to do this.
This is how my web service & method are defined:
[WebService(Namespace = "http://tempuri.org/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[ScriptService]
public class PingWS : WebService
{
    [WebMethod]
    public void SessionActive(string sessionID)
    {
        // does stuff here
    }
}
This is how I'm calling it in js (the request is over HTTPS):
$.ajax({
    type: "POST",
    url: "PingWS.asmx/SessionActive",
    data: '{"sessionID": "' + aspSessionID + '"}',
    contentType: "application/json; charset=utf-8",
    beforeSend: function (xhr) {
        xhr.setRequestHeader('Cookie', '');
        xhr.setRequestHeader('Cookie', 'ASP.NET_SessionId=aaa; .ASPXAUTH=bbb;');
    },
    dataType: "json"
});
I was trying with the setRequestHeader, but that just appends to the header rather than overwrites the header, and IIS is happy to ignore that junk I added.
I'm thinking maybe I should be trying to do this at the server end, somehow taking PingWS.asmx out of the loop so that it doesn't keep the session active, but I'm not sure how to do this.
Although the title of the question is focused on clearing the cookie in the header, I'd be super happy if anyone points out that I'm being really stupid and there is actually a much better way of trying to do what I'm doing.
I'm thinking at this stage maybe I need to add something to the webmethod that says how long this particular page has been inactive, and use that knowledge to timeout manually. That actually sounds pretty easy, so I think I'll do that for now. I'm still convinced that there must be an easy way to do what I originally wanted to do though.
Update
I'm thinking I'm pretty screwed in terms of cookie manipulation here, as both the .ASPXAUTH and ASP.NET_SessionId cookies are HttpOnly, which means the browser takes them out of your hands; you can't access them via document.cookie. So I would say that leaves me with:
Updating my SessionActive webmethod to track each request so I can tell how long the user has been sitting idle on a page, and time out if need be
Marking the .asmx page somehow on the server end so that it's taken out of the normal authentication/session tracking flow
I know how to do 1. (sketched below), so I'll start there, but 2. seems much cleaner to me.
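For what it's worth, a minimal sketch of what option 1 could look like, with the idle time reported by the client; the idleSeconds parameter and the 30-minute figure are illustrative, not part of the real spec:

[WebMethod(EnableSession = true)]
public void SessionActive(string sessionID, int idleSeconds)
{
    // Illustrative only: the client reports how long the page has been idle,
    // and the server ends the session once that exceeds the configured timeout.
    const int timeoutSeconds = 30 * 60;   // keep in sync with the forms auth timeout in web.config
    if (idleSeconds >= timeoutSeconds)
    {
        System.Web.Security.FormsAuthentication.SignOut();   // drop the auth ticket
        Session.Abandon();                                    // and the ASP.NET session
    }
    // otherwise the ping is treated as a normal keep-alive
}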

You could restrict the authentication cookie to a given Path:
<authentication mode="Forms">
<forms path="/admin" />
</authentication>
This way the authentication cookie will only be sent to the /admin portion of your site and if you put your web service at the root you can ping it with AJAX without sending the cookie.
Another option is to simply host this webservice into a separate virtual directory.

As you're finding, you're out of luck with client-side cookie manipulation (or client-side manipulation of any kind). Manipulating headers might be possible, but you'd have to intercept the traffic very early in the pipeline; I don't even know whether it would be possible in the service itself, though a traffic manager like Zeus could do it. I don't believe it's possible to configure the session engine to ignore a given combination of endpoint and client request, and although it should be possible to replace the entire session engine, that would be undocumented and extremely time-consuming, I'd think.
Basically you need to manipulate traffic before it touches the service or you're not going to resolve this. Session was not designed to be variant.

There is a server side answer. Basically you disable sliding timeouts on forms authentication, then manually slide the timeout yourself, and skip it for the ping.
Pretty easy for me as every page uses the same root Site.Master and all pages inherit from the same base class.
Summed up here:
http://picometric.blogspot.com/2009/04/manual-sliding-expiration.html
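Roughly, the idea looks like this: my own sketch of the technique rather than the article's code, with slidingExpiration="false" on the <forms> element and the 30-minute window purely illustrative. It goes in the shared base page (or Site.Master code-behind) that every normal page uses:

protected override void OnLoad(EventArgs e)
{
    base.OnLoad(e);
    // Manually "slide" the forms auth ticket on real page views only.
    HttpCookie authCookie = Request.Cookies[FormsAuthentication.FormsCookieName];
    if (authCookie != null)
    {
        FormsAuthenticationTicket oldTicket = FormsAuthentication.Decrypt(authCookie.Value);
        var newTicket = new FormsAuthenticationTicket(
            oldTicket.Version, oldTicket.Name, oldTicket.IssueDate,
            DateTime.Now.AddMinutes(30), oldTicket.IsPersistent,
            oldTicket.UserData, oldTicket.CookiePath);
        authCookie.Value = FormsAuthentication.Encrypt(newTicket);
        if (newTicket.IsPersistent)
        {
            authCookie.Expires = newTicket.Expiration;
        }
        Response.Cookies.Set(authCookie);
    }
}

Because PingWS.asmx never runs this code, the background ping no longer extends the ticket.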

Related

Frequent GET requests stop being actually processed by HTTP handler / always return same value

I have inherited this code, which runs a 1-second jQuery AJAX loop on the client side. It used to rely heavily on cookies and I am trying to change it to plain stateless HTTP at least, but now I have the following problem:
Every POST from the client is processed, and the first few GETs too, but after a short while the server-side HttpHandler is not even called on GET requests and the client code success callbacks always get passed the same - non-updated - data.
//edit: since people tend to assume otherwise: I have stepped through the code with a debugger, so when I say "handler is not called on get requests" and "client code success callbacks get passed the same data always" I mean that quite literally.
I figure this might be a problem of the Web Server caching responses to HTTP requests, but it's kind of a wild guess.
So I have a bunch of questions which might help me solve such problems in the future:
Is this a reasonable theory?
I would like to somehow get an overview of all the HTTP requests the server registers and how it chooses to process them.
Also, where and how would I go about configuring the server beyond the web.config, if for example I wanted to configure its caching behaviour?
It's the client-side cache that is causing this.
Set cache to false on your AJAX request.
$.ajax({
url: "http://your.url.here",
cache: false
})
.done(function(data) {
// ...
});
More details here.
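If you also want to rule out caching on the server or a proxy, the handler itself can mark its responses as non-cacheable. A minimal sketch; the handler name and payload are made up:

public class PollingHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Tell the browser and any intermediate proxies not to cache this response.
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.Cache.SetNoStore();
        context.Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1));

        context.Response.ContentType = "application/json";
        context.Response.Write("{\"value\":42}");   // illustrative payload
    }

    public bool IsReusable
    {
        get { return true; }
    }
}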

Which one is more secure between Response.Redirect or Server.Transfer on the same server?

I have been reading about these two functions and am trying to pick the one which is more secure. I want to use Server.Transfer because it executes on the server side, in a sense. Is it better to use?
Server.Transfer("myUrl.aspx?id=1");
or
Response.Redirect("myUrl.aspx?id=2");
Update:
My question is about the security of client-side data that comes from a previous page, rather than about the URL changing.
tl;dr:
Neither Server.Transfer nor Response.Redirect offers security advantages over the other. I strongly recommend not using Server.Transfer at all, as it is an anti-pattern for modern HTTP/web resource-based paradigms; further explanation on that below. Use Response.Redirect and focus on authorization/identity for security concerns.
Neither offers more security than the other. The server/endpoint still accepts HTTP/HTTPS requests, and any request can be sent to the server by a malicious client.
You should prefer Response.Redirect over Server.Transfer. Server.Transfer is an ASP.NET Web Forms "code smell". ASP.NET Web Forms has never respected the HTTP, RESTful, stateless, resource-request paradigms that the web is built on.
Server.Transfer is a very old method. It maintains the original URL in the browser. This can help streamline data entry for wizards, but it also makes for confusion when debugging.
Maintaining the original URL is also a perfect example of ASP.NET Web Forms doing what it wants: making life easier in the short term but hurting the maintainability of the software in the long term. It also goes against the grain of HTTP/web protocols, because it prevents the user from sharing the resource URL. Even if you plan on that URL never being shared, there is always a use case where sharing the URL is very helpful: it tells customer service, troubleshooting, or debugging exactly which place/resource the user was on at the time of an error, issue, or question, so they can better serve the user/customer/client.
Server.Transfer is an example of a shortcut. It has no security advantages: the server/endpoints are exposed to client requests either way, whether the server responds with a different resource (Server.Transfer) or tells the client to redirect (Response.Redirect) and request another resource.
Regarding the "skipping" round trip advantage of Server.Transfer over Response.Redirect, it is a very small benefit considering that Server.Transfer is a web anti-pattern as I explained above. It guides developers to less elegant web systems architecture rather quickly as well.
Regarding the second parameter of Server.Transfer, preserveForm: setting preserveForm to true keeps the form and query string available to the next page you are sending the user to, but that is not advantageous enough to warrant its use, because it hurts the long-term maintainability of the web application.
preserveForm is also an anti-pattern for the stateless, RESTful, resource-based modern web applications/paradigms I have been discussing above. If you need to maintain form state across requests, it should be done on the client with local storage; it is not the responsibility of the server to maintain state for each client. preserveForm is yet another example of ASP.NET Web Forms trying to make things easier for the developer in the short term while making code overly complex and difficult to maintain and debug in the long term.
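For reference, this is roughly how that second parameter is used (the target page name is illustrative):

// true = preserveForm: Request.Form and Request.QueryString from this request
// remain readable in Target.aspx after the transfer.
Server.Transfer("Target.aspx", true);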
Using Response.Redirect would be more secure if you use it like this:
if (!Request.IsLocal && !Request.IsSecureConnection)
{
    if (Request.Url.Scheme.Equals(Uri.UriSchemeHttp, StringComparison.InvariantCultureIgnoreCase))
    {
        // Strip the "http://" prefix, leaving host + path + query
        string sNonSchemeUrl = Request.Url.AbsoluteUri.Substring(
            Uri.UriSchemeHttp.Length + Uri.SchemeDelimiter.Length);
        // Ensure www. is prepended if it is missing
        if (!sNonSchemeUrl.StartsWith("www", StringComparison.InvariantCultureIgnoreCase))
        {
            sNonSchemeUrl = "www." + sNonSchemeUrl;
        }
        string redirectUrl = Uri.UriSchemeHttps + Uri.SchemeDelimiter + sNonSchemeUrl;
        Response.Redirect(redirectUrl);
    }
}
As it converts an HTTP request to a secure HTTP request (HTTPS).
Both are equal as far as security is concerned...
Server.Transfer("myUrl.aspx?id=1");
Server.Transfer hands the request to the new page entirely on the server back end; the browser never sees the change.
Response.Redirect("myUrl.aspx?id=2");
Response.Redirect sends a response to the front end, which then goes back to the back end with a new request for the target URL.
You can observe the difference if you debug both from the front end and the back end.

ASP.Net MVC3 - Is there a way to ignore a request?

I have an ASP.NET MVC3 website with a REST API service.
When a user passes in an invalid API key, or they have been blacklisted, I wish to ignore the request.
I know I could send back a 404 or a 503, but if someone keeps polling me I would ideally like to ignore the request entirely, causing a time-out at their end and thus delaying the hammering my server gets.
Is this possible within ASP.NET MVC3? If so, any help would be most appreciated.
Thank you
For what you want, you still need to parse the request, so it will always consume server resources, especially if you have an annoying user sending a query every 500ms...
In these situations you would block the IP / header of the request for a period of, say, 10 minutes. It would be a very good idea to block it on your load balancer so the request never even reaches your application; this is easily accomplished if you're using Amazon's services to run your service, and the other cloud providers support it as well, if you are on cloud hosting at all.
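To do the blocking inside the MVC application itself, a global action filter is one way to cut blacklisted callers off before your controllers do any real work. A rough sketch; the Blacklist lookup is a hypothetical helper, not part of MVC:

public class BlacklistFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        string ip = filterContext.HttpContext.Request.UserHostAddress;
        if (Blacklist.Contains(ip))   // hypothetical lookup against your banned IPs
        {
            // Short-circuit the action: the caller just gets an empty 403.
            filterContext.Result = new HttpStatusCodeResult(403);
        }
    }
}

// Registered for every action in Global.asax.cs:
// GlobalFilters.Filters.Add(new BlacklistFilter());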
If you can only work within your web application (and this is a solution that is not tested), you could add an ignored route to your routing mechanism like:
routes.IgnoreRoute("{*allignore}", new { allignore = @".*\.ignore(/.*)?" });
and upon checking that the IP is banned, simply redirect, using for example Response.Redirect(), to a .ignore path on your site... or, why not redirect that request to google.com just for the fun of it?

How to secure my generic handler calls?

I am creating a MySpace application, and for some database entries I am using generic handlers which I have hosted on another website. From my MySpace application I use AJAX calls to those handlers to perform the activities that I want. I want to know how I can make these AJAX calls secure? I mean, I want to be sure that the handlers are being called only by the MySpace app and not by entering the URL into the browser, etc. Any ideas?
You can secure your generic web handler by doing a trick with UrlReferrer, e.g.:
if (context.Request.UrlReferrer == null)
{
    context.Response.Write("Invalid Request");
    return;
}
In addition, when UrlReferrer != null, you can check that the referring domain matches your own site, e.g.:
if (context.Request.UrlReferrer.ToString().IndexOf("http://www.tyamjoli.com") != -1)
{
    // Valid request
}
This is 100% impossible. Everyone will have access to your javascript and can modify it however they want. They can use TamperData to view all requests that the browser makes and drop/modify/replay them.
I don't know much about myspace apps, but is there a server component to them? If so, you could first request a "token" from the app, which would be the encrypted action plus some arbitrary timeout, say 3 seconds. The token is then passed to the generic handler, which decrypts it and checks the timeout. If valid, the decrypted action is performed.
Outside factors such as network latency and unsynchronized clocks could keep some actions from being performed. This should hamper simple replay attacks but is still vulnerable to a scripted attack.
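A sketch of that token idea, signing with an HMAC plus a timestamp rather than encrypting; the class name, shared secret, and field layout are all made up for illustration:

using System;
using System.Security.Cryptography;
using System.Text;

public static class ActionToken
{
    // Illustrative shared secret; in practice keep this out of source control.
    private static readonly byte[] Key = Encoding.UTF8.GetBytes("replace-with-a-real-secret");

    // Called by the server component when the page is first served.
    public static string Issue(string action)
    {
        string payload = action + "|" + DateTime.UtcNow.Ticks;
        return Convert.ToBase64String(Encoding.UTF8.GetBytes(payload)) + "." + Sign(payload);
    }

    // Called by the generic handler before performing the action.
    public static bool TryValidate(string token, TimeSpan maxAge, out string action)
    {
        action = null;
        string[] parts = token.Split('.');
        if (parts.Length != 2) return false;

        string payload = Encoding.UTF8.GetString(Convert.FromBase64String(parts[0]));
        if (Sign(payload) != parts[1]) return false;              // signature mismatch

        string[] fields = payload.Split('|');
        var issued = new DateTime(long.Parse(fields[1]), DateTimeKind.Utc);
        if (DateTime.UtcNow - issued > maxAge) return false;      // token too old

        action = fields[0];
        return true;
    }

    private static string Sign(string payload)
    {
        using (var hmac = new HMACSHA256(Key))
        {
            return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
        }
    }
}

The handler would call TryValidate with a window of a few seconds, as described above.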

PageMethods security

I'm trying to 'AJAX-ify' my site in order to improve the UI experience. In terms of performance, I'm also trying to get rid of the UpdatePanel. I've come across a great article over at Encosia showing a way of posting using PageMethods. My question is, how secure are page methods in a production environment? Being public, can anyone create a JSON script to POST directly to the server, or are there cross-domain checks taking place? My PageMethods would also write the data into the database (after filtering).
I'm using Forms Authentication in my pages and, on page load, it redirects unauthenticated users to the login page. Would the Page Methods on this page also need to check authentication if the user POSTs directly to the method, or is that authentication inherited for the entire page? (Essentially, does the entire page cycle occur even if a user has managed to post only to the PageMethod)?
Thanks
PageMethods are as secure as the handler in which they reside.
FormsAuthentication will protect everything except the Login page.
On an unprotected handler, like login, you should expose only methods that 1) are not sensitive or 2) validate the user.
EDIT: in response to comments and other answers regarding CSRF and XSS please see http://weblogs.asp.net/scottgu/archive/2007/04/04/json-hijacking-and-how-asp-net-ajax-1-0-mitigates-these-attacks.aspx
You're trying to protect against CSRF attacks.
These attacks can be prevented by requiring an authorization code in the POST parameters, and supplying the auth code in the initial page load. (The auth code should be per-IP address and per-user, and should expire quickly)
For added security, you can make each auth-code only usable once, and have each request return a new auth-code. (However, if any request fails, you'll need to reload the page)
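For a WebForms page method, that might look roughly like this; the hidden field and session key are illustrative, not a standard API:

// On the normal page load, issue a per-user code and hand it to the client.
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        Session["AuthCode"] = Guid.NewGuid().ToString("N");
        hidAuthCode.Value = (string)Session["AuthCode"];   // hypothetical hidden field the JS caller reads
    }
}

// The page method requires the code back and rejects anything that doesn't match.
[WebMethod(EnableSession = true)]
public static string SaveData(string payload, string authCode)
{
    var expected = HttpContext.Current.Session["AuthCode"] as string;
    if (expected == null || !string.Equals(authCode, expected))
    {
        throw new InvalidOperationException("Invalid request token.");
    }
    // ...filter and persist payload here...
    return "ok";
}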
I am working on a project that heavily utilizes ASP.NET WebForms page methods, which I talk to using AJAX. This is much more convenient for me than writing all my code in JavaScript.
However, securing the page methods became an issue that troubled me. I found that I could access the page methods via Postman and Fiddler, which enables hackers to play with your APIs.
My solution was quite simple and I discovered it accidentally: adding a check for a cookie in the page method returns an error for any client that is NOT the website.
[WebMethod]
[ScriptMethod(UseHttpGet = false, ResponseFormat = ResponseFormat.Json)]
public static string GetAnything(object dat)
{
    // If the cookie is missing (e.g. the call came from Postman rather than the site),
    // myguid is null and the next line throws, which produces the generic error shown below.
    HttpCookie myguid = HttpContext.Current.Request.Cookies.Get(Constants.Session.PreventHacking);
    var hackguid = myguid.Value ?? "";
    // ...other page method contents...
    return "anything";
}
A Postman request to this method would return:
{
    "Message": "There was an error processing the request.",
    "StackTrace": "",
    "ExceptionType": ""
}
A more detailed error shows if you are running on localhost.
I understand there are browser add-ons that can intercept API calls by sitting alongside the website. I have not tested this; a separate security fix would have to be built for that case.
I'll update here once I perform some tests.
Think of PageMethods like a mini web service local to the page. The fact is they will have no extra checks and verifications in place except those that apply to the entire website and those that you choose to put in.
Using PageMethods is a smart idea from the point of view of encapsulation, and if you're going to use them it doesn't hurt to put some extra security measures in place.
