JSON, ASMX, and that pesky d: - JSON.NET

I have looked through lots of posts and have not been successful in determining how to get rid of the pesky d in the response coming from my ASMX web service, as in {"d":{"Response":"OK","Auth-Key":"JKPYZFZU"}}.
It is created when my method 'public Dictionary UserDevice' returns the Dictionary object.
I would be perfectly happy if the damn thing just wouldn't put it all into the d object!

Basically, JSON array notation ['hello'] is valid JavaScript on its own, whereas JSON object notation {'d': ['hello'] } is not. This has the consequence that the array notation is executable, which opens up the possibility of XSS attacks. Wrapping your data in an object by default helps prevent this.
You can read more about why it's there in a post by Dave Ward. (edit: as pointed out by @user1334007, Chrome now flags that site as unsafe)
A comment by Dave Reed on that article is particularly informative:
It’s one of those security features with a purpose that is very easy to misunderstand. The protection isn’t really against accidentally executing the alert in your example. Although that is one benefit of ‘d’, you’d still have to worry about that while evaluating the JSON to convert it to an object.

What it does do is prevent the JSON response from being wholesale executed as the result of an XSS attack. In such an attack, the attacker could insert a script element that calls a JSON webservice, even one on a different domain, since script tags support that. And, since it is a script tag after all, if the response looks like JavaScript it will execute as JavaScript. The same XSS attack can overload the object or array constructors (among other possibilities) and thereby get access to that JSON data from the other domain.

To successfully pull that off, you need (1) an XSS-vulnerable site (good.com; any site will do), (2) a JSON webservice that returns a desired payload on a GET request (e.g. bank.com/getaccounts), (3) an evil location (evil.com) to which to send the data you captured from bank.com while people visit good.com, and (4) an unlucky visitor to good.com who just happened to be logged into bank.com using the same browser session.

Protecting your JSON service from returning valid JavaScript is just one thing you can do to prevent this. Disallowing GET is another (script tags always do GET). Requiring a certain HTTP header is another (script tags can’t set custom headers or values). The webservice stack in ASP.NET AJAX does all of these. Anyone creating their own stack should be careful to do the same.
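
To make those last two defenses concrete, here is a minimal sketch of a guarded JSON endpoint (the handler class and header check are illustrative, not the actual ASP.NET AJAX implementation): it rejects GETs and requires a header that a script tag cannot send.

using System.Web;

// Hypothetical IHttpHandler guarding a JSON endpoint. A <script src="...">
// tag can only issue GET requests and cannot attach custom headers, so both
// checks below stop the cross-domain script-tag attack described above.
public class JsonEndpointHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        bool isPost = context.Request.HttpMethod == "POST";
        bool hasAjaxHeader =
            context.Request.Headers["X-Requested-With"] == "XMLHttpRequest";

        if (!isPost || !hasAjaxHeader)
        {
            context.Response.StatusCode = 403; // reject script-tag requests
            return;
        }

        context.Response.ContentType = "application/json";
        context.Response.Write("{\"Response\":\"OK\"}");
    }
}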

You are probably using some kind of framework that automatically wraps your web service JSON responses in the d element.
I know that Microsoft's JSON serializer adds the d on the server side, and the client-side AJAX code that deserializes the JSON string expects it to be there.
I think jQuery works this way too.
You can read a little more about this at Rick Strahl's blog.
And there is a way for you to return pure JSON (without the 'd' element) using the WCF "Raw" programming model.
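If you are staying on ASMX, a commonly used workaround (a minimal sketch with illustrative names; this service is not from the question) is to declare the web method void and write the JSON to the response yourself, so the framework has no return value to wrap in d:

using System.Collections.Generic;
using System.Web.Script.Serialization;
using System.Web.Script.Services;
using System.Web.Services;

[ScriptService]
public class DeviceService : WebService
{
    // Because the method returns void and writes the response directly,
    // ASP.NET never gets a return value to wrap in {"d": ...}.
    [WebMethod]
    public void UserDevice()
    {
        var result = new Dictionary<string, string>
        {
            { "Response", "OK" },
            { "Auth-Key", "JKPYZFZU" }
        };

        Context.Response.ContentType = "application/json";
        Context.Response.Write(new JavaScriptSerializer().Serialize(result));
    }
}

Bear in mind that doing this also gives up the anti-hijacking protection the d wrapper provides, as discussed above.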


Determine if requester is an Ajax call and/or is expecting JSON (or another content type)

I have solved a problem with a solution I found here on SO, but I am curious whether another idea I had is as bad as I think it might be.
I am debugging a custom security Attribute we have on several of our controllers. The Attribute currently redirects unauthorized users using a RedirectResult. This works fine except when calling the methods with Ajax. In those cases, the error returned to our JS consists of the full HTML of our error page (the one we redirect to), along with the HTTP status 200/OK. I have solved this issue using the "IsAjaxRequest" method described in the answer to this question, and I am now perfectly able to respond differently to Ajax calls.
Out of curiosity, however, I would like to know what pitfalls might exist if I had instead solved the issue by doing the following. To me it seems like a bad idea, but I can't quite figure out why...
The ActionExecutingContext ("filterContext") has an HttpContext, which has a Request, which in turn has an AcceptTypes string collection. I notice that on my Ajax calls, which expect JSON, the value of filterContext.HttpContext.Request.AcceptTypes[0] is "application/json". I am wondering what might go wrong if I were to check this string against one or more expected content types and respond to them accordingly. Would this work, or is it asking for disaster?
I would say it works perfectly, and I have been using that approach for years.
The whole point of request headers is to let the client tell the server what it accepts and expects.
I suggest you read more here about Web API and how it uses exactly that technique.
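For illustration, here is a minimal sketch of that branching inside an action filter (the attribute name and the authorization check are placeholders, not your actual security attribute):

using System.Web.Mvc;

public class CustomSecurityAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        if (IsAuthorized(filterContext.HttpContext))
            return;

        // Request.IsAjaxRequest() checks the X-Requested-With header;
        // inspecting Request.AcceptTypes for "application/json" is the
        // alternative discussed in the question.
        if (filterContext.HttpContext.Request.IsAjaxRequest())
        {
            filterContext.HttpContext.Response.StatusCode = 401;
            filterContext.Result = new JsonResult
            {
                Data = new { error = "unauthorized" },
                JsonRequestBehavior = JsonRequestBehavior.AllowGet
            };
        }
        else
        {
            filterContext.Result = new RedirectResult("~/Error/Unauthorized");
        }
    }

    private static bool IsAuthorized(HttpContextBase context)
    {
        // Placeholder for the real security check.
        return context.User != null && context.User.Identity.IsAuthenticated;
    }
}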

How to detect and possibly drop/sanitize http request parameters/headers to prevent XSS attacks

Recently, we found that some of our Spring MVC-based site's pages, which accept query parameters, are susceptible to XSS attacks. For example, a URL like http://www.our-site.com/page?s='-(console.log(document.cookie))-'&a=1&fx=326tTDE could result in the injected JS being executed in the context of the rendered page. These pages are all GET-based; no POST requests are supported.
These parameters are written in the markup in numerous places, so doing an HTML encode (in all these places) would be tedious and require more code changes. In some cases, they are also written to cookies.
Ideally, we would want to detect them as early as possible, say inside a Servlet Filter/Spring Interceptor and then for each request parameter, decide if we want to drop it all together, or sanitize it in some way, before it's available to the rest of the application. We would want this decision to be configurable as well, so that the approach to handle a particular request parameter can be modified over time without significant code change.
Now, since these are request parameters that we want to potentially modify, we would probably have to use an approach similar to the one described here, if we go the Filter way. We would potentially want to sanitize HTTP Request Headers similarly too.
So, what would be the most flexible/minimum overhead way to handle this situation? Would ESAPI be able to both detect and sanitize them, in a configurable way? It's not clear from its API as to what is possible. We would definitely not want to hand-roll regexes to do this. Also, would a Filter be the right place to handle this?
Thanks.

Google Geocoding Recommendation

I am looking into utilizing the Google Maps API to do some geocoding. I want to implement client-side geocoding to avoid the request limits.
I need to do some fairly complex logic on the result set, and I would prefer to do that in C# as it is an ASP.NET MVC application. However, part of that logic may involve making subsequent follow-up requests, and that again would require JavaScript.
So my first thought is to create a service in my application that the JSON results can be passed to, with certain return types triggering the subsequent requests. That seems a little convoluted, and I want to know from the community whether this seems like the best approach and if there are any libraries/third-party tools that can help handle this situation.
I have an app that does something similar, with the complexity somewhat decoupled by using standardized events (within this app, not a W3C standard or anything):

The client uses native geolocation, SimpleGeo, and Google Loader to guess where the user is, and AJAXes that to the server.
The server uses the client data, MaxMind, and user preferences to decide where to treat the user as being.
The server response is generic event data (as a JSON response) that is converted by a generic AJAX response handler into one or more events triggered against the body element.
Depending on the page, various listeners are bound to the events and/or namespaces (see jQuery namespaced events), and they handle the updated location events, e.g., getting different weather data or changing local search results.
Some of those listeners in turn trigger other AJAX requests, and the responses to those may also carry generic events to be triggered...

This way there's no sequential code I have to write; I can add or remove behaviors (simple or complex) without changing anything else. jQuery events are all I use, and there's really nothing much to it after you decide how you'll pattern things.
Let me know if that's interesting to you and you want me to expand on or clarify a part of it.
You may want to try this API:
http://code.google.com/apis/maps/documentation/geocoding/
It's far more REST-like - no JavaScript required. It may work better with C#.
In the end I found the best solution was to do as I stated in my question: pass the JSON object to a controller, do the work, then return. It worked pretty well.
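For reference, that round trip can be as simple as this sketch (the controller, action, and GeocodeResult type are illustrative, and assume MVC 3+, whose JsonValueProviderFactory binds a JSON body; the real Google result has many more fields): the client geocodes, posts the result, and the JSON reply tells the client whether to issue a follow-up request.

using System.Web.Mvc;

public class GeocodeController : Controller
{
    // The client-side geocoder posts its result here; the model binder
    // maps the JSON fields onto the GeocodeResult properties.
    [HttpPost]
    public JsonResult ProcessResult(GeocodeResult result)
    {
        // ...complex logic on the result set goes here...
        bool needsFollowUp = result.PartialMatch;

        // The client inspects followUp and, if true, issues the next
        // geocoding request with the refined query.
        return Json(new
        {
            followUp = needsFollowUp,
            query = needsFollowUp ? result.FormattedAddress : null
        });
    }
}

public class GeocodeResult
{
    public string FormattedAddress { get; set; }
    public bool PartialMatch { get; set; }
}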

What is the difference between AJAH and AJAX?

While reading about the ASP.NET AJAX Toolkit, I stumbled upon the term AJAH. What is it, and how is it different from Ajax?
AJAH is someone's attempt to come up with a new buzzword to mean "Having JavaScript make an HTTP request and get back a blob of HTML instead of a blob of XML".
Since Ajax has been taken to mean "Having JavaScript make an HTTP request and get back anything at all" (most often JSON these days, but a blob of HTML is also very common), it is a pretty pointless attempt.
Quote taken from here.
With true AJAX, a call is made to the server, the nicely formatted data is returned, and the client application extracts the data from the XML and replaces whatever elements need to be replaced on a page. With AJAH, a glob of HTML is returned and slapped into the page.
So basically, AJAH returns pure HTML, while AJAX returns formatted data, such as JSON, that is dealt with by the client scripts.
Personally, I think this just looks like a term only a few developers have used. It's definitely not mainstream.

Should I use Request.Params instead of explicitly doing Request.Form?

I have been using Request.Form for all my code, and if I need the query string I hit that explicitly too. It came up in a code review that I should probably use the Params collection instead.
I thought it was a best practice to hit the appropriate collection directly. I am looking for some reinforcement for one side or the other of the argument.
It is more secure to use Request.Form. This will prevent users from "experimenting" with posted form parameters simply by changing the URL. Using Request.Form doesn't make this secure against "real hackers", but IMHO it's better to use the Form collection.
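A small sketch of the difference this answer is pointing at (the handler and parameter names are made up): Request.Params searches QueryString, Form, Cookies, and ServerVariables in that order, so a value appended to the URL can satisfy a lookup you meant to be form-only.

using System.Web;

public class PaymentHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        // Suppose the form was posted with no "amount" field, but the user
        // appended ?amount=9999 to the URL.
        string viaParams = context.Request.Params["amount"]; // "9999" (from the query string)
        string viaForm = context.Request.Form["amount"];     // null (nothing was posted)

        // Reading Request.Form explicitly ignores the query-string value,
        // which is exactly the "experimenting with the URL" scenario above.
        context.Response.ContentType = "text/plain";
        context.Response.Write(viaForm ?? "no amount posted");
    }
}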
By using the properties under the request you are narrowing down your retrieval to the proper collection (which is a good thing for readability and performance). I consider your approach to be a best practice and follow it myself.
I have always used
Request.Form("Param")
or
Request.QueryString("Param")
This is purely down to syntax that is easier to read. I seriously doubt there is a performance impact.
The only time I use Request.Params instead of Form or QueryString is if I don't know the method by which the parameters will be passed in.
To put that in context, in 10 years I have used Request.Params in anger only once :)
Kindness,
D
I think it's better to use the Form and QueryString collections explicitly, unless you're deliberately designing flexible behavior into your application, as in a search form where the search parameters can be defined in a URL or saved in cookies, such as pagination preferences.
I would use Request.Form and Request.QueryString explicitly. The reason is that the two are not interchangeable. The query string is used for HTTP GET requests, and form variables for HTTP POST requests.
GET requests are typically applicable where you are requesting data; e.g., in a Google search, the search words are in the query string. POST is used when you are sending data to the web server for processing or storing. So when I say that the two are not interchangeable, I mean that you cannot change the page from using a GET to a POST without breaking functionality.
So IMHO, the implementation of the page can quite clearly reflect the fact that you intend it to be called by a GET or a POST request.
/Pete
