Should I use Request.Params instead of explicitly doing Request.Form? - asp.net

I have been using Request.Form in all my code, and if I need the query string I hit that explicitly too. It came up in a code review that I should probably use the Params collection instead.
I thought it was a best practice to hit the appropriate collection directly. I am looking for some reinforcement for one side or the other of the argument.

It is more secure to use Request.Form. This prevents users from "experimenting" with posted form parameters simply by changing the URL. Using Request.Form doesn't stop "real hackers", but IMHO it's better to use the Form collection.

By using the properties on the request you are narrowing your retrieval down to the proper collection (which is a good thing for readability and performance). I consider your approach a best practice and follow it myself.

I have always used
Request.Form("Param")
or
Request.QueryString("Param")
This is purely down to syntax that is easier to read. I seriously doubt there is a performance impact.

The only time I use Request.Params instead of Form or QueryString is if I don't know by which method the parameters will be passed in.
To put that in context, in 10 years I have used Request.Params in anger only once :)
Kindness,
D

I think it's better to use the Form and QueryString collections explicitly, unless you're deliberately designing flexible behavior, such as a search form whose parameters can be defined in a URL, or preferences (like pagination settings) saved in cookies.

I would use Request.Form and Request.QueryString explicitly. The reason is that the two are not interchangeable: the query string is used for HTTP GET requests, and form variables for HTTP POST requests.
GET requests typically apply where you are requesting data; e.g., in a Google search, the search words are in the query string. POST is for when you are sending data to the web server for processing or storing. So when I say the two are not interchangeable, I mean that you cannot change a page from using a GET to a POST without breaking functionality.
So IMHO, the implementation of the page can quite clearly reflect the fact that you intend it to be called by a GET or a POST request.
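For illustration, a minimal C# sketch (the page handler and field names are assumed, not from the original posts) of reading each collection explicitly, so the code reflects whether the page expects a GET or a POST:

    protected void Page_Load(object sender, EventArgs e)
    {
        // GET: search terms arrive in the query string.
        string query = Request.QueryString["q"];

        // POST: submitted form fields arrive in the Form collection.
        string userName = Request.Form["userName"];

        // Request.Params would merge QueryString, Form, Cookies and
        // ServerVariables, hiding which channel a value actually came from.
    }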
/Pete

Related

Determine if requester is an Ajax call and/or is expecting JSON (or another content type)

I have solved a problem with a solution I found here on SO, but I am curious whether another idea I had is as bad as I think it might be.
I am debugging a custom security Attribute we have on/in several of our controllers. The Attribute currently redirects unauthorized users using a RedirectResult. This works fine except when calling the methods with Ajax. In those cases, the error returned to our JS consists of a text string of all the HTML of our error page (the one we redirect to) as well as the HTTP code and text 200/OK. I have solved this issue using the "IsAjaxRequest" method described in the answer to this question. Now I am perfectly able to respond differently to Ajax calls.
Out of curiosity, however, I would like to know what pitfalls might exist if I had instead solved the issue by doing the following. To me it seems like a bad idea, but I can't quite figure out why...
The ActionExecutingContext ("filterContext") has an HttpContext, which has a Request, which in turn has an AcceptTypes string collection. I notice that on my Ajax calls, which expect JSON, the value of filterContext.HttpContext.Request.AcceptTypes[0] is "application/json". I am wondering what might go wrong if I were to check this string against one or more expected content types and respond to them accordingly. Would this work, or is it asking for disaster?
I would say it works perfectly, and I have been using that for years.
The whole point of request headers is to let the client tell the server what it accepts and expects.
I suggest you read more here about Web API and how it uses exactly that technique.
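To make that concrete, here is a hedged C# sketch of how such an attribute might branch on the request (the attribute name, the authorization check, and the redirect target are assumptions, not from the original posts):

    using System;
    using System.Linq;
    using System.Web.Mvc;

    public class RequiresAuthAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            bool authorized = false; // stand-in for the real security check
            if (authorized) return;

            var request = filterContext.HttpContext.Request;
            bool wantsJson = request.AcceptTypes != null &&
                request.AcceptTypes.Any(t =>
                    t.StartsWith("application/json", StringComparison.OrdinalIgnoreCase));

            if (request.IsAjaxRequest() || wantsJson)
            {
                // Ajax/JSON callers get a real 401 plus a JSON body, instead of
                // a 200/OK response carrying the HTML of the error page.
                filterContext.HttpContext.Response.StatusCode = 401;
                filterContext.Result = new JsonResult
                {
                    Data = new { error = "unauthorized" },
                    JsonRequestBehavior = JsonRequestBehavior.AllowGet
                };
            }
            else
            {
                filterContext.Result = new RedirectResult("~/Error/AccessDenied");
            }
        }
    }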

How to detect and possibly drop/sanitize http request parameters/headers to prevent XSS attacks

Recently, we found that some of our SpringMVC-based site's pages that accept query parameters are susceptible to XSS attacks. For example, a URL like http://www.our-site.com/page?s='-(console.log(document.cookie))-'&a=1&fx=326tTDE could result in the injected JS being executed in the context of the rendered page. These pages are all GET-based; no POST requests are supported.
These parameters are written in the markup in numerous places, so doing an HTML encode (in all these places) would be tedious and require more code changes. In some cases, they are also written to cookies.
Ideally, we would want to detect them as early as possible, say inside a Servlet Filter/Spring Interceptor, and then for each request parameter decide whether we want to drop it altogether or sanitize it in some way before it's available to the rest of the application. We would want this decision to be configurable as well, so that the approach for handling a particular request parameter can be modified over time without significant code change.
Now, since these are request parameters that we want to potentially modify, we would probably have to use an approach similar to the one described here, if we go the Filter way. We would potentially want to sanitize HTTP Request Headers similarly too.
So, what would be the most flexible/minimum-overhead way to handle this situation? Would ESAPI be able to both detect and sanitize them in a configurable way? It's not clear from its API what is possible. We would definitely not want to hand-roll regexes to do this. Also, would a Filter be the right place to handle this?
Thanks.

Using duplicate parameters in a URL

We are building an API in-house and often are passing a parameter with multiple values.
They use: mysite.com?id=1&id=2&id=3
Instead of: mysite.com?id=1,2,3
I favor the second approach, but I was curious whether it is actually incorrect to do the first.
I'm not an HTTP guru, but from what I understand there's no definitive standard for the query part of the URL regarding multiple values; it's typically up to the CGI that handles the request to parse the query string.
RFC 1738 section 3.3 mentions a searchpart and that it should go after the ? but doesn't seem to elaborate on its format.
http://<host>:<port>/<path>?<searchpart>
I did not (bother to) check which RFC defines it. (Anyone who knows, please leave a reference in the comments.) But in practice, mysite.com?id=1&id=2&id=3 is already what a browser produces when a form contains duplicate fields, typically checkboxes. See it in action in this w3schools example page. So there is a good chance that whatever programming language you are using already provides helper functions to parse input like that, and probably returns a list.
You could, of course, go with your own approach such as mysite.com?id=1,2,3, which is not bad at all in this particular case. But you will need to implement your own logic to produce and consume that format, and you may need to handle some corner cases yourself: what if the input is not well-formed, like mysite.com?id=1,2,? Do you need to invent yet another separator if the comma itself can be valid input, like mysite.com?name=Doe,John|Doe,Jane? Would you reach a point where you use a JSON string as the value, like mysite.com?name=["John Doe", "Jane Doe"]? Your mileage may vary.
Worth adding that inconsistent handling of duplicate parameters in the URL on the server may lead to vulnerabilities, specifically server-side HTTP parameter pollution; for a practical example, see Client side Http Parameter Pollution - Yahoo! Classic Mail Video Poc.
With the first approach you will get an array of query string values; with the second approach you will get a single string.
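A small C# illustration of that difference (a console sketch using System.Web's parser; the values are made up):

    using System;
    using System.Web;

    class QueryStringDemo
    {
        static void Main()
        {
            // Repeated keys: the framework collects the values for you.
            var repeated = HttpUtility.ParseQueryString("id=1&id=2&id=3");
            string[] ids = repeated.GetValues("id");      // ["1", "2", "3"]

            // Comma-separated: you get one string and split it yourself.
            var joined = HttpUtility.ParseQueryString("id=1,2,3");
            string[] split = joined["id"].Split(',');     // ["1", "2", "3"]

            Console.WriteLine(string.Join("|", ids));     // 1|2|3
            Console.WriteLine(string.Join("|", split));   // 1|2|3
        }
    }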
I guess it depends on the technology you use and which form is more convenient with it. I am currently facing the same question, using currency=USD,CHF or currency=USD&currency=CHF.
I am using Thymeleaf, and the second option makes it easy to work with: I can then request something like ${param.currency.contains(currency.value)}. When I try the first option, it seems to treat the "array" as a string, so I need to split first and then do the contains, which leads to messier code.
Just my 50 cents :-)

json asmx and that pesky d:

I have looked through lots of posts and have not been successful in determining how to get rid of the pesky d in the response coming from my asmx web service, as in {"d":{"Response":"OK","Auth-Key":"JKPYZFZU"}}.
This is being created by my method 'public Dictionary UserDevice' returning the Dictionary object.
I would be perfectly happy if the damn thing just wouldn't put it all into the d object!
Basically, JSON array notation ['hello'] is valid JavaScript by itself, whereas JSON object notation {'d': ['hello']} is not valid JavaScript by itself. This has the consequence that the array notation is executable, which opens up the possibility of XSS attacks. Wrapping your data in an object by default helps prevent this.
You can read more about why it's there in a post by Dave Ward. (edit: as pointed out by #user1334007, Chrome tags this site as unsafe now)
A comment by Dave Reed on that article is particularly informative:
It's one of those security features that has a very easy to misunderstand purpose. The protection isn't really against accidentally executing the alert in your example. Although that is one benefit of 'd', you'd still have to worry about that while evaluating the JSON to convert it to an object.
What it does do is prevent the JSON response from being wholesale executed as the result of an XSS attack. In such an attack, the attacker could insert a script element that calls a JSON webservice, even one on a different domain, since script tags support that. And, since it is a script tag after all, if the response looks like javascript it will execute as javascript. The same XSS attack can overload the object or array constructors (among other possibilities) and thereby get access to that JSON data from the other domain.
To successfully pull that off, you need (1) an XSS-vulnerable site (good.com), any site will do, (2) a JSON webservice that returns a desired payload on a GET request (e.g. bank.com/getaccounts), (3) an evil location (evil.com) to which to send the data you captured from bank.com while people visit good.com, (4) an unlucky visitor to good.com that just happened to be logged into bank.com using the same browser session.
Protecting your JSON service from returning valid javascript is just one thing you can do to prevent this. Disallowing GET is another (script tags always do GET). Requiring a certain HTTP header is another (script tags can't set custom headers or values). The webservice stack in ASP.NET AJAX does all of these. Anyone creating their own stack should be careful to do the same.
You are probably using some kind of framework that automatically wraps your web service json responses with the d element.
I know that microsoft's JSON serializer adds the d on the server side, and the client-side AJAX code that deserializes the JSON string expects it to be there.
I think jQuery works this way too.
You can read a little more about this at Rick Strahl's blog
And there is a way for you to return pure json (without the 'd' element) using the WCF "Raw" programming model.
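For reference, a hedged C# sketch of the kind of ASMX service the question describes (the class and method names are assumed); with [ScriptService], the ASP.NET AJAX stack serializes the return value and adds the "d" wrapper on its own:

    using System.Collections.Generic;
    using System.Web.Script.Services;
    using System.Web.Services;

    [ScriptService]
    public class DeviceService : WebService
    {
        [WebMethod]
        [ScriptMethod(ResponseFormat = ResponseFormat.Json)]
        public Dictionary<string, string> UserDevice()
        {
            // The framework, not this code, wraps the serialized result as
            // {"d":{"Response":"OK","Auth-Key":"JKPYZFZU"}}.
            return new Dictionary<string, string>
            {
                { "Response", "OK" },
                { "Auth-Key", "JKPYZFZU" }
            };
        }
    }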

So why should we use POST instead of GET for posting data? [duplicate]

Possible Duplicates:
How should I choose between GET and POST methods in HTML forms?
When do you use POST and when do you use GET?
Obviously, you should. But apart from doing so to fulfil the HTTP protocol, are there any reasons to do so? Less overhead? Some kind of security thing?
Because, by definition, GET must not alter the state of the server.
See RFC 2616, 9.1.1 Safe Methods:
9.1.1 Safe Methods
Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.
In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
If you use GET to alter the state of the server then a search engine bot or some link prefetching extension in a web browser can wreak havoc on your site and (for example) delete all user data just by following links to your site.
There is a nice paper by the W3C about this: URIs, Addressability, and the use of HTTP GET and POST.
1.3 Quick Checklist for Choosing HTTP GET or POST
Use GET if:
The interaction is more like a question (i.e., it is a safe operation such as a query, read operation, or lookup).
Use POST if:
The interaction is more like an order, or
The interaction changes the state of the resource in a way that the user would perceive (e.g., a subscription to a service), or
The user will be held accountable for the results of the interaction.
Because, if you use GET to alter state, Google can delete your stuff.
If you accept GETs to perform write operations, then a malicious hacker could inject links somewhere to perform an unauthorized operation. Your user clicks on a link, and something is deleted from a database. Or maybe some amount of money is transferred away from the user's account if they're still logged in to their online banking.
http://superbank.com/TransferMoney?amount=1000&recipient=2342524
Send a malicious email with an embedded image referencing this link, and as soon as the document is opened, something funny has happened behind the scenes.
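As a hedged illustration (ASP.NET MVC assumed; the controller shape is invented, only the action name is borrowed from the URL above), a state-changing action can refuse GET outright and demand an anti-forgery token:

    using System.Web.Mvc;

    public class BankController : Controller
    {
        [HttpPost]                    // GET requests to this action are rejected
        [ValidateAntiForgeryToken]    // a link or <img> tag cannot supply this token
        public ActionResult TransferMoney(decimal amount, int recipient)
        {
            // ... perform the transfer for the authenticated user ...
            return RedirectToAction("Index");
        }
    }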
GET is limited by the length of the URL the browser/server can handle. This used to be as short as 256 characters.
There is at least one situation where you want a GET to change data on the server: when a GET returns data and you need to record which data was given to a user and when it was given.
If you use complex data types then the request must be a POST; it cannot be a GET. For example, testing a WCF web service in a browser can only be done when the contract uses simple data types.
Using GET and POST where it is expected helps to keep your program understandable.
When you use GET, you can see the information being sent in the address bar of the web browser. This is not the case when you use the POST method.
This article was somewhere on http://www.w3schools.com/. Once I've found the exact page it was on, I'll repost. :-)
