Are JSON web services vulnerable to CSRF attacks?

I am building a web service that exclusively uses JSON for its request and response content (i.e., no form-encoded payloads).
Is a web service vulnerable to CSRF attack if the following are true?
1. Any POST request without a top-level JSON object, e.g., {"foo":"bar"}, will be rejected with a 400. For example, a POST request with the content 42 would be thus rejected.
2. Any POST request with a content-type other than application/json will be rejected with a 400. For example, a POST request with content-type application/x-www-form-urlencoded would be thus rejected (a sketch of checks 1 and 2 follows this list).
3. All GET requests will be safe, and thus will not modify any server-side data.
4. Clients are authenticated via a session cookie, which the web service gives them after they provide a correct username/password pair via a POST with JSON data, e.g., {"username":"user@example.com", "password":"my password"}.
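For concreteness, a rough sketch of what checks 1 and 2 could look like (Express-style code purely for illustration; the question does not specify the stack):

const express = require("express");
const app = express();

// Check 2: reject any POST whose content type is not application/json.
app.use(function (req, res, next) {
    if (req.method === "POST" &&
            !/^application\/json/.test(req.headers["content-type"] || "")) {
        return res.status(400).send("Unsupported content type");
    }
    next();
});

app.use(express.json());

// Check 1: reject any POST whose body is not a top-level JSON object,
// e.g. a bare 42 or an array.
app.use(function (req, res, next) {
    if (req.method === "POST" &&
            (typeof req.body !== "object" || req.body === null || Array.isArray(req.body))) {
        return res.status(400).send("Expected a top-level JSON object");
    }
    next();
});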
Ancillary question: Are PUT and DELETE requests ever vulnerable to CSRF? I ask because it seems that most (all?) browsers disallow these methods in HTML forms.
EDIT: Added item #4.
EDIT: Lots of good comments and answers so far, but no one has offered a specific CSRF attack to which this web service is vulnerable.

Forging arbitrary CSRF requests with arbitrary media types is effectively only possible with XHR, because a form’s method is limited to GET and POST and a form’s POST message body is also limited to the three formats application/x-www-form-urlencoded, multipart/form-data, and text/plain. However, with the form data encoding text/plain it is still possible to forge requests containing valid JSON data.
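For example, a hypothetical attacker page (URL and field names invented) can abuse the text/plain encoding, where the field's name, the "=" separator, and its value concatenate into valid JSON on the wire:

<!-- Sends the body {"foo":"bar","padding":"="} with Content-Type: text/plain -->
<form action="https://victim.example/api/update" method="POST" enctype="text/plain">
    <input type="hidden" name='{"foo":"bar","padding":"' value='"}'>
</form>
<script>document.forms[0].submit();</script>

The body parses as JSON, but the request still carries Content-Type: text/plain, which is exactly what check 2 in the question rejects.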
So the only threat comes from XHR-based CSRF attacks. And those will only be successful if they are from the same origin, so basically from your own site somehow (e.g., XSS). Be careful not to mistake disabling CORS (i.e., not setting Access-Control-Allow-Origin: *) for a protection: CORS simply prevents clients from reading the response. The whole request is still sent and processed by the server.

Yes, it is possible. You can set up an attacker server that sends the victim machine a 307 redirect to the target server. You need to use Flash to send the POST instead of an HTML form.
Reference: https://bugzilla.mozilla.org/show_bug.cgi?id=1436241
It also works on Chrome.
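A minimal sketch of the attacker-side redirect described in that bug (Node; URLs invented; the Flash file that issues the original POST is omitted):

const http = require("http");

// A 307 preserves the method, body, and content type of the incoming POST,
// so the browser replays it against the target server.
http.createServer(function (req, res) {
    res.writeHead(307, { "Location": "https://target.example/api/action" });
    res.end();
}).listen(8080);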

It is possible to do CSRF on JSON-based RESTful services using Ajax. I tested this on an application (using both Chrome and Firefox).
You have to change the contentType to text/plain and the dataType to JSON in order to avoid a preflight request. Then you can send the request, but in order to send session data, you need to set the withCredentials flag on your Ajax request.
I discuss this in more detail here (references are included):
http://wsecblog.blogspot.be/2016/03/csrf-with-json-post-via-ajax.html
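A sketch of such a request (URL and payload invented): text/plain keeps it a "simple" CORS request, so no preflight is sent, and withCredentials attaches the victim's session cookie:

$.ajax({
    url: "https://victim.example/api/update",
    type: "POST",
    contentType: "text/plain",              // avoids the preflight
    data: JSON.stringify({ foo: "bar" }),   // still valid JSON on the wire
    dataType: "json",
    xhrFields: { withCredentials: true }    // send the session cookie
});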

I have some doubts concerning point 3. Although a GET can be considered safe in that it does not alter data on the server side, the data can still be read, and the risk is that it can be stolen.
http://haacked.com/archive/2008/11/20/anatomy-of-a-subtle-json-vulnerability.aspx/

Is a web service vulnerable to CSRF attack if the following are true?
Yes. It's still HTTP.
Are PUT and DELETE requests ever vulnerable to CSRF?
Yes
it seems that most (all?) browsers disallow these methods in HTML forms
Do you think that a browser is the only way to make an HTTP request?


Is there a real advantage to using the correct HTTP methods?

Most people use GET and POST for all of their requests; are there any major problems that should be considered a real reason to respect the correct semantics?
What are the disadvantages of not using HEAD, PUT, DELETE, TRACE, etc.?
The semantics of every HTTP verb are known by clients (like browsers) and intermediaries alike. They treat requests accordingly (in terms of logging, caching, replaying, etc.).
Some examples:
A GET/HEAD request is considered "safe" and can be replayed a number of times without breaking anything. On the other hand, a POST request is not considered safe, because it implies a remote state modification. That's why, when you navigate by posting a form and then click the "back" button of your browser, the browser asks whether you want to POST the form again.
A GET request should be used to retrieve a remote resource. That's why the resource can be cached by intermediate HTTP-aware equipment in order to reduce the number of requests on the backend systems. The response of a GET request is considered cacheable (this is implemented in service workers, web servers, HTTP proxies, etc.); see the sketch after these examples.
If you use a GET request in place of a POST, you're sending the payload in the URL, which means potentially sensitive data gets logged by the backend and by all intermediaries (a URL is considered loggable, non-sensitive information).
About PUT, PATCH, and DELETE: there is no major concern about using a POST instead of one of these. They're here to help you build self-documented RESTful APIs and define fine-grained authorizations on the endpoints, and they're pretty similar to POST once the semantics are set aside (non-cacheable; they carry data in the body where applicable), though note that PUT and DELETE are defined as idempotent while POST is not.
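A hedged sketch of the caching point (Express-style, routes invented):

const express = require("express");
const app = express();

// Safe, cacheable GET: intermediaries may cache and replay this freely.
app.get("/articles/42", function (req, res) {
    res.set("Cache-Control", "public, max-age=300");
    res.json({ id: 42, title: "Hello" });
});

// State-changing POST: never cacheable, and browsers warn before replaying it.
app.post("/articles", express.json(), function (req, res) {
    res.status(201).json({ id: 43 });
});

app.listen(3000);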
GET requests are generally for requesting data and POST is for sending data. The biggest difference is that GET parameters are visible in the URL after a "?", while POST data is sent in the body of the request. POST keeps such data out of URLs and logs, and should be used to send sensitive data such as passwords. GET requests can be used to send parameters through just the URL. For example, if your website dynamically loads pages based on parameters, it is useful to use GET. YouTube uses GET to pass the video id:
https://www.youtube.com/watch?v=dQw4w9WgXcQ
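To illustrate (the login endpoint is invented), the same kind of parameter travels in the URL for GET but in the request body for POST:

// Visible in the URL, and therefore in history and server logs:
fetch("https://www.youtube.com/watch?v=dQw4w9WgXcQ");

// Carried in the request body instead:
fetch("https://example.com/login", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: "password=my%20password"
});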

asp.net and Cross Site Request Forgery

We recently ran an AppScan against an application, and on a few pages the report shows:
The following changes were applied to the original request:
Set the HTTP 'Referer' header to 'http://bogus.referer.ibm.com'
Reasoning:
The same request was sent twice in different sessions and the same response was received. This shows that none of the parameters are dynamic (session identifiers are sent only in cookies) and therefore that the application is vulnerable to this issue.
I'm a bit confused on how to handle this. Should I just look at Request.UrlReferrer and make sure it's the same host as what's in the URL, or is there a better way to handle this?
Thanks.
The Referer header can be spoofed quite easily. You need to use CSRF tokens (I recommend the Synchronizer Token Pattern) to prove the origination of the request. There is an awesome resource at OWASP that you should definitely read. Good luck!
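A minimal sketch of the Synchronizer Token Pattern (Node/Express as a stand-in for the ASP.NET app; all names invented):

const crypto = require("crypto");
const express = require("express");
const session = require("express-session");

const app = express();
app.use(session({ secret: "change-me", resave: false, saveUninitialized: true }));
app.use(express.urlencoded({ extended: false }));

app.get("/form", function (req, res) {
    // Issue one random token per session and embed it in the form.
    if (!req.session.csrfToken) {
        req.session.csrfToken = crypto.randomBytes(32).toString("hex");
    }
    res.send('<form method="POST" action="/submit">' +
        '<input type="hidden" name="csrfToken" value="' + req.session.csrfToken + '">' +
        '<button>Submit</button></form>');
});

app.post("/submit", function (req, res) {
    // A forged cross-site request cannot know the session's token.
    if (req.body.csrfToken !== req.session.csrfToken) {
        return res.status(403).send("CSRF token mismatch");
    }
    res.send("OK");
});

app.listen(3000);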

CSRF Protection for non form post requests

Implementing CSRF tokens in hidden form fields is the standard protection for CSRF for form post requests.
However, how would you implement this for GET requests? Or for Ajax requests that POST JSON data instead of x-www-form-urlencoded request bodies? Are these types of things all handled on a case-by-case, ad hoc basis?
OWASP says this about CSRF and GET requests:
The ideal solution is to only include the CSRF token in POST requests and modify server-side actions that have a state-changing effect to only respond to POST requests. This is in fact what RFC 2616 requires for GET requests. If sensitive server-side actions are guaranteed to only ever respond to POST requests, then there is no need to include the token in GET requests.
Also, OWASP notes:
Many implementations of this control include the challenge token in GET (URL) requests [...] while this control does help mitigate the risk of CSRF attacks, the unique per-session token is being exposed for GET requests. CSRF tokens in GET requests are potentially leaked at several locations: browser history, HTTP log files, network appliances that make a point to log the first line of an HTTP request, and Referrer headers if the protected site links to an external site.
The trouble here is that if a user's token is leaked, you're still vulnerable - and it's all too easy to leak the token. I'm not sure that there's a good answer to your question that doesn't involve converting all of those GET requests to POST requests.
It's worth noting that the Viewstate feature in ASP.NET WebForms does offer some protection against CSRF, though it's very limited - in fact, it also only protects POSTback requests.
To state this more simply, you shouldn't use a GET request as an entry point to any function that does something beyond return a read only resource for a browser to render. So don't have an AJAX script make a GET based call to a URL like transferMoney.aspx?fromAcct=xyz&toAcct=abc&amount=20.
The HTTP specification states explicitly that HTTP GET requests should not have side effects, and it's considered best practice to keep your GET requests safe and idempotent whenever possible.
I've written an article about protecting ASP.NET MVC against CSRF; it spells out a practical approach to applying the AntiForgeryToken to POST controller methods on your site.
It depends on the CSRF-protection pattern you're applying. First off, CSRF applies to endpoints that change state. If your GET requests change state, then I advise that you modify them to POSTs. Having said that, it is a valid point.
The ideal place to persist tokens is the Authorization header. This is the lowest common denominator across form POSTs, AJAX, and GET requests. You could of course store the token in a cookie, but there is a vulnerability in that design when applied to a multi-domain site.
You can parse the token from a hidden field during a form POST without issue. Not much point in changing that. Assuming that your GET requests are invoked with AJAX, you can leverage jQuery's ajaxSetup method to automatically insert the token on every AJAX request:
$.ajaxSetup({
    beforeSend: function (xhr) {
        xhr.setRequestHeader("Authorization", "TOKEN " + myToken);
    }
});
There is a relatively new pattern gaining traction called the Encrypted Token Pattern. It's described in detail here, and also on the official OWASP CSRF Cheat Sheet. There is also a working implementation called ARMOR, which may offer you the flexibility you're looking for across various types of requests.

Which is better: passing username/password as parameters in the HTTP header or the HTTP body?

I am implementing a REST server.
I am going to receive the username, request id, and password for each request from the user.
I have two choices: I can ask users to pass those three parameters in the HTTP body or in HTTP headers.
Which is the better way to implement this, and why?
Thanks in advance.
Header!
If I understand your question, you have something that you are going to pass with every single request. That means if you want to support safe requests like GET and HEAD, you only have two choices: The HTTP headers or the URL (typically via query parameters).
Since it includes authentication information, you should avoid putting it in the URL. Other than that, assuming it is encrypted, an added layer of protection would be to send it over SSL; the header and body are equally safe/vulnerable in transit, so it makes no difference from a security standpoint.
Putting it in the header also decouples it from the application state and from the media type, which is a good thing. If you want to support JSON, XML, and XHTML forms, it makes no difference to your authentication parameters.
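For example (endpoint and request-id header invented), header-borne credentials work the same for every method and media type:

// The same credentials ride along whether the request is GET, HEAD, or POST:
fetch("https://api.example.com/orders", {
    headers: {
        // Basic auth: base64-encoded "username:password"
        "Authorization": "Basic " + btoa("user:secret"),
        "X-Request-Id": "42"  // hypothetical header for the request id
    }
});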

Customize the Authorization HTTP header

I need to authenticate a client when he sends a request to an API. The client has an API-token and I was thinking about using the standard Authorization header for sending the token to the server.
Normally this header is used for Basic and Digest authentication. But I don't know if I'm allowed to customize the value of this header and use a custom authentication scheme, e.g.:
Authorization: Token 1af538baa9045a84c0e889f672baf83ff24
Would you recommend this or not? Or is there a better approach for sending the token?
You can create your own custom auth schemes that use the Authorization: header - for example, this is how OAuth works.
As a general rule, if servers or proxies don't understand the values of standard headers, they will leave them alone and ignore them. It is creating your own header keys that can often produce unexpected results - many proxies will strip headers with names they don't recognise.
Having said that, it is possibly a better idea to use cookies to transmit the token rather than the Authorization: header, for the simple reason that cookies were explicitly designed to carry custom values, whereas the specification for HTTP's built-in auth methods does not really say either way - if you want to see exactly what it does say, have a look here.
The other point about this is that many HTTP client libraries have built-in support for Digest and Basic auth but may make life more difficult when trying to set a raw value in the header field, whereas they will all provide easy support for cookies and will allow more or less any value within them.
In the case of cross-origin requests, read this:
I faced this situation and at first I chose to use the Authorization Header and later removed it after facing the following issue.
The Authorization header is considered a custom header, so if a cross-domain request is made with the Authorization header set, the browser first sends a preflight request. A preflight request is an HTTP request using the OPTIONS method that carries none of the original request's parameters. Your server needs to respond with an Access-Control-Allow-Headers header whose value includes your custom header (the Authorization header).
So for each request the client (browser) sent, an additional HTTP request (OPTIONS) was being sent by the browser. This degraded the performance of my API.
You should check whether adding this degrades your performance. As a workaround, I am sending tokens in HTTP parameters, which I know is not the best way of doing it, but I couldn't compromise on performance.
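For reference, this is roughly the server-side response the preflight needs, plus Access-Control-Max-Age so the browser can cache the answer instead of re-sending OPTIONS every time (Express-style sketch; the origin and paths are invented):

const express = require("express");
const app = express();

app.options("/api/*", function (req, res) {
    res.set({
        "Access-Control-Allow-Origin": "https://app.example.com",
        "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE",
        "Access-Control-Allow-Headers": "Authorization",
        "Access-Control-Max-Age": "86400"  // cache the preflight for a day
    });
    res.sendStatus(204);
});

app.listen(3000);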
This is a bit dated, but there may be others looking for answers to the same question. You should think about what protection spaces make sense for your APIs. For example, you may want to identify and authenticate client application access to your APIs, to restrict their use to known, registered client applications. In this case, you can use the Basic authentication scheme with the client identifier as the user-id and the client shared secret as the password. You don't need proprietary authentication schemes; just clearly identify the one(s) to be used by clients for each protection space. I prefer only one for each protection space. The HTTP standards do allow multiple authentication schemes on each WWW-Authenticate header and multiple WWW-Authenticate headers in each response, but this will leave API clients confused about which options to use. Be consistent and clear, and your APIs will be used.
I would recommend not using HTTP authentication with custom scheme names. If you feel that you have something of generic use, you can define a new scheme, though. See http://greenbytes.de/tech/webdav/draft-ietf-httpbis-p7-auth-latest.html#rfc.section.2.3 for details.
Try the following in Postman; setting it in the header section worked for me:
Authorization : JWT eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyIkX18iOnsic3RyaWN0TW9kZSI6dHJ1ZSwiZ2V0dGVycyI6e30sIndhc1BvcHVsYXRlZCI6ZmFsc2UsImFjdGl2ZVBhdGhzIjp7InBhdGhzIjp7InBhc3N3b3JkIjoiaW5pdCIsImVtYWlsIjoiaW5pdCIsIl9fdiI6ImluaXQiLCJfaWQiOiJpbml0In0sInN0YXRlcyI6eyJpZ25vcmUiOnt9LCJkZWZhdWx0Ijp7fSwiaW5pdCI6eyJfX3YiOnRydWUsInBhc3N3b3JkIjp0cnVlLCJlbWFpbCI6dHJ1ZSwiX2lkIjp0cnVlfSwibW9kaWZ5Ijp7fSwicmVxdWlyZSI6e319LCJzdGF0ZU5hbWVzIjpbInJlcXVpcmUiLCJtb2RpZnkiLCJpbml0IiwiZGVmYXVsdCIsImlnbm9yZSJdfSwiZW1pdHRlciI6eyJkb21haW4iOm51bGwsIl9ldmVudHMiOnt9LCJfZXZlbnRzQ291bnQiOjAsIl9tYXhMaXN0ZW5lcnMiOjB9fSwiaXNOZXciOmZhbHNlLCJfZG9jIjp7Il9fdiI6MCwicGFzc3dvcmQiOiIkMmEkMTAkdTAybWNnWHFjWVQvdE41MlkzZ2l3dVROd3ZMWW9ZTlFXejlUcThyaDIwR09IMlhHY3haZWUiLCJlbWFpbCI6Im1hZGFuLmRhbGUxQGdtYWlsLmNvbSIsIl9pZCI6IjU5MjEzYzYyYWM2ODZlMGMyNzI2MjgzMiJ9LCJfcHJlcyI6eyIkX19vcmlnaW5hbF9zYXZlIjpbbnVsbCxudWxsLG51bGxdLCIkX19vcmlnaW5hbF92YWxpZGF0ZSI6W251bGxdLCIkX19vcmlnaW5hbF9yZW1vdmUiOltudWxsXX0sIl9wb3N0cyI6eyIkX19vcmlnaW5hbF9zYXZlIjpbXSwiJF9fb3JpZ2luYWxfdmFsaWRhdGUiOltdLCIkX19vcmlnaW5hbF9yZW1vdmUiOltdfSwiaWF0IjoxNDk1MzUwNzA5LCJleHAiOjE0OTUzNjA3ODl9.BkyB0LjKB4FIsCtnM5FcpcBLvKed_j7rCCxZddwiYnU
