In what use cases is a web service likely to receive OPTIONS requests other than CORS?

I have recently implemented a CORS IDispatchMessageInspector, applied through a BehaviorExtensionElement, for services within a large project I am working on, to add CORS support (needed because REST WCF web services are called from jQuery Ajax).
The current implementation intercepts all OPTIONS requests to any endpoint with the CORS behavior specified and responds with the appropriate headers (and a 200). As it stands, the service expects to see OPTIONS requests only for CORS requests; however, I cannot guarantee that this will always be the case.
In the interest of future-proofing and extensibility, what are the most common reasons for OPTIONS requests outside of CORS? Are there plans to extend the use of such requests in future W3C specs (as this seems to suggest)? Are there any use cases that I should attempt to allow for?

It's the other way around.
A CORS preflight request will be an OPTIONS request that includes the Origin and Access-Control-Request-Method request headers, by which you can recognize it as such.
Any other OPTIONS request is just that, and can be sent by any client for any reason.
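Applied to the inspector described in the question, that check might look like the following minimal sketch (class name and handling details are illustrative, not taken from the question's implementation):

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// Illustrative inspector: only an OPTIONS request that also carries Origin and
// Access-Control-Request-Method is treated as a CORS preflight; any other OPTIONS
// request falls through to the service untouched.
public class CorsPreflightInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
    {
        var http = (HttpRequestMessageProperty)request.Properties[HttpRequestMessageProperty.Name];

        bool isCorsPreflight =
            http.Method == "OPTIONS" &&
            !string.IsNullOrEmpty(http.Headers["Origin"]) &&
            !string.IsNullOrEmpty(http.Headers["Access-Control-Request-Method"]);

        return isCorsPreflight;   // passed to BeforeSendReply as correlation state
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        if ((bool)correlationState)
        {
            // Add the Access-Control-Allow-* headers and the 200 response here,
            // exactly as the existing CORS behavior does.
        }
    }
}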

WebDAV clients are known to use OPTIONS to probe for supported protocol levels and methods (see RFC 4918).

Related

How does EnableCors restrict origin access?

I have created a Web API controller as below:
[EnableCors("http://localhost:1234", "*", "*")]
public class DummyController : ApiController
{
    public string GetDummy()
    {
        return "Iam not DUMMY";
    }
}
When I hit the service using Ajax from my application, which is hosted on localhost:5678, it throws an error since the origin is not allowed; but when I hit the same API from a REST client like Postman, it returns data.
Questions
1) Does CORS restrict only Ajax requests and not normal HTTP requests? I believe Postman sends normal HTTP requests.
2) How does EnableCors restrict access to the provided origins? Consider that if I modify the Origin and Referer headers in the Ajax request, I can spoof the values. What strategy does CORS use to identify the referrer URL?
As the W3C states, the HTTP Referer can easily be modified, so one should not depend on its value to authorize access. If that is the case, what does EnableCors check behind the scenes to authorize the origin?
I could also just change my origin in the Ajax request. Please help me with this; I am pretty confused.
Does CORS restrict only Ajax requests and not normal HTTP requests? I believe Postman sends normal HTTP requests.
Yes, specifically browsers restrict Ajax requests — that is, browsers by default don’t allow frontend JavaScript code to access responses from cross-origin requests made with XMLHttpRequest, the Fetch API, or with Ajax methods from JavaScript libraries.
Servers don’t themselves enforce any restrictions on cross-origin requests; instead, servers send responses to any clients that make requests to them, including postman — and including browsers.
Browsers themselves always get the responses that any other client would; but just because the browser gets a response doesn’t mean the browser will allow frontend JavaScript code to access that response. Browsers will only expose a response for a cross-origin request to frontend code if the response includes the Access-Control-Allow-Origin header.
How does EnableCors restrict access to the provided origins?
It doesn’t. When you CORS-enable a server, the only effect that has is to cause the server to send additional response headers, based on the values of particular request headers it receives — in particular, the Origin request header.
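To make that concrete, here is a minimal sketch of what CORS-enabling a Web API server amounts to. This is an illustrative DelegatingHandler, not the actual EnableCors implementation, and the allowed origin is just the one from the question:

using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Illustration only: the server merely adds a response header; the browser does
// all of the enforcement.
public class SimpleCorsHandler : DelegatingHandler
{
    private static readonly string[] AllowedOrigins = { "http://localhost:1234" };

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var response = await base.SendAsync(request, cancellationToken);

        IEnumerable<string> origins;
        if (request.Headers.TryGetValues("Origin", out origins))
        {
            var origin = origins.FirstOrDefault();
            if (AllowedOrigins.Contains(origin))
            {
                // Advertise that this origin is allowed; the request itself was
                // never blocked by the server.
                response.Headers.Add("Access-Control-Allow-Origin", origin);
            }
        }
        return response;
    }
}

Note that the handler never rejects the request; it only decides whether to advertise the origin as allowed, and the browser does the rest.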
Consider that if I modify the Origin and Referer headers in the Ajax request, I can spoof the values. What strategy does CORS use to identify the referrer URL?
Servers don’t (and can’t) do any validation of the Origin value to confirm it hasn’t been spoofed or whatever. But the CORS protocol doesn’t require servers to do that — because all CORS enforcement is done by browsers.
As the W3C states, the HTTP Referer can easily be modified, so one should not depend on its value to authorize access. If that is the case, what does EnableCors check behind the scenes to authorize the origin?
I could also just change my origin in the Ajax request. Please help me with this; I am pretty confused.
Browsers know the real origin of any frontend code that sends a cross-origin request, and browsers do CORS checks against what they know to be the real origin of the request — and not against the value of the Origin header.
Browsers are what set the Origin request header and send it over the network in the first place. They set its value based on what they know to be the real origin; they don't set it for their own use, because they already know the real origin and that is the value they check against internally.
So even if you manage to change an Origin header for a request, that won’t matter to the browser — it’s going to ignore that value and continue checking against the real origin.
cf. the answer at
In the respective of security, is it meaningful to allow CORS for specific domains?

How is it possible to enable CORS from the server side?

From my limited experience with Web API, the same-origin policy is a policy of browsers, i.e., the browser doesn't allow requests to hosts other than the origin. I wonder how it is possible to enable CORS from the server side (talking about ASP.NET Web API)?
This is how I enable CORS in Web API:
namespace WebService.Controllers
{
    [EnableCors(origins: "*", headers: "*", methods: "*")]
    public class TestController : ApiController
    {
        // Controller methods ...
    }
}
If CORS is a browser thing, isn't it more logical to enable it from the client side? Can anybody clear this up?
Here’s an attempt at a short summary of how it works: The browser is where the same-origin policy and cross-origin restrictions are enforced. Specifically, browsers block frontend JavaScript code from being able to access responses from cross-origin requests—unless the servers the requests are made to send the response header Access-Control-Allow-Origin in responses.
In other words, the way to get browsers to relax the same-origin policy is for servers to use the Access-Control-Allow-Origin header to indicate they’re opting in to cross-origin requests.
So, browsers are the place where any cross-origin restrictions are either being applied or relaxed.
One case that helps to illustrate how it works is a simple cross-origin POST. As long as a cross-origin POST doesn’t have any custom request headers that will trigger browsers to do a CORS preflight OPTIONS request, a browser will go ahead and make the request, even cross-origin. And the server that POST is sent to will go ahead and accept it and then send a response.
What happens then is where the cross-origin restrictions from browsers kick in—because if that POST request was sent from frontend JavaScript code using XHR or the Fetch API or an Ajax method from some JavaScript library, then unless the response includes the Access-Control-Allow-Origin header, browsers won’t allow the frontend code to access the response (even though the server accepted the POST and it succeeded).
Anyway, I hope the above helps to clarify what enabling CORS support in servers actually means, and what effects it has, and that the actual policy enforcement is performed by browsers.
Of course all of the above just describes the simplest case, where there are no characteristics of the request that will trigger browsers to do a CORS preflight OPTIONS request.
But still in that case, the policy enforcement is all performed by the browser—in fact even more so, in that, for example, browsers won’t allow a POST with custom headers to even be sent to a server to begin with unless the server explicitly indicates (in its response to the preflight OPTIONS) that the server has opted in to receiving cross-origin requests which include that custom header.

Supporting both ASP.NET Caching and ETag/Conditional GET in WCF WebHttp Service

I am trying to implement a REST web service with WCF that supports both caching and Conditional GETs.
I implemented basic caching following the instructions in MSDN: Caching Support for WCF Web HTTP Services. That means adding an [AspNetCacheProfile("MyOutputCacheProfile")] attribute to each of my web methods and adding appropriate entries to web.config. That seems to work correctly: cached responses are returned when identical arguments are passed to the web methods.
Then I added support for Conditional GET by calculating an ETag value and setting that on the response like this:
WebOperationContext.Current.OutgoingResponse.SetETag(myETag);
That sorta works: I can see the ETag header in the response the first time I call the web method.
But here's the problem: The next time I invoke that web method with the same arguments, a cached response is returned, and the cached response does not include the ETag header. (If I wait until cache expiration, or disable caching entirely, then the ETag headers are returned properly.)
So, is there any way get the cached responses to include that ETag value?
Update: After some more study and experimentation, I find that doing this causes the ETag header to be included in all cached responses:
HttpContext.Current.Response.Cache.SetETag(myETag);
If I call that, then I don't need to call the associated WebOperationContext...SetETag() operation to make everything work.
Is this the Right Way to do this?
Correct me if I am wrong. RESTful services are closer to HTTP, and the HTTP caching specification (RFC 2616) says:
The goal of caching in HTTP/1.1 is to eliminate the need to send requests in many cases, and to eliminate the need to send full responses in many other cases. The former reduces the number of network round-trips required for many operations; we use an "expiration" mechanism for this purpose (see section 13.2). The latter reduces network bandwidth requirements; we use a "validation" mechanism for this purpose (see section 13.3).
ASP.NET output caching does not fall into either of these categories (neither expiration nor validation). The caching is done only on the web server: instead of executing the method, IIS sends back the stored response. Somehow, it does not fit the RESTful model.
To implement caching, we should add Cache-Control headers and an ETag to the response and then handle the conditional GET. Please consult this excellent article.
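For WCF WebHttp specifically, a sketch of what that can look like in a web method follows. Customer, LoadCustomer and ComputeETagFor are hypothetical stand-ins, and CheckConditionalRetrieve is what turns a matching If-None-Match into a 304 Not Modified:

using System.Net;
using System.ServiceModel.Web;

// Sketch of a web method (on the service class) that sends both Cache-Control and
// ETag headers and honours a conditional GET.
public Customer GetCustomer(string id)
{
    Customer customer = LoadCustomer(id);      // hypothetical data access
    string etag = ComputeETagFor(customer);    // hypothetical ETag calculation

    // If the request carries a matching If-None-Match header, this throws a
    // WebFaultException with 304 Not Modified and no body is sent.
    WebOperationContext.Current.IncomingRequest.CheckConditionalRetrieve(etag);

    OutgoingWebResponseContext response = WebOperationContext.Current.OutgoingResponse;
    response.SetETag(etag);
    response.Headers[HttpResponseHeader.CacheControl] = "max-age=60, must-revalidate";
    return customer;
}

With must-revalidate, clients fall back to the ETag validation path (a cheap 304) once max-age expires, instead of re-downloading the full response.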

Are JSON web services vulnerable to CSRF attacks?

I am building a web service that exclusively uses JSON for its request and response content (i.e., no form encoded payloads).
Is a web service vulnerable to CSRF attack if the following are true?
Any POST request without a top-level JSON object, e.g., {"foo":"bar"}, will be rejected with a 400. For example, a POST request with the content 42 would be thus rejected.
Any POST request with a content-type other than application/json will be rejected with a 400. For example, a POST request with content-type application/x-www-form-urlencoded would be thus rejected.
All GET requests will be Safe, and thus not modify any server-side data.
Clients are authenticated via a session cookie, which the web service gives them after they provide a correct username/password pair via a POST with JSON data, e.g. {"username":"user@example.com", "password":"my password"}.
Ancillary question: Are PUT and DELETE requests ever vulnerable to CSRF? I ask because it seems that most (all?) browsers disallow these methods in HTML forms.
EDIT: Added item #4.
EDIT: Lots of good comments and answers so far, but no one has offered a specific CSRF attack to which this web service is vulnerable.
Forging arbitrary CSRF requests with arbitrary media types is effectively only possible with XHR, because a form’s method is limited to GET and POST and a form’s POST message body is also limited to the three formats application/x-www-form-urlencoded, multipart/form-data, and text/plain. However, with the form data encoding text/plain it is still possible to forge requests containing valid JSON data.
So the only threat comes from XHR-based CSRF attacks. And those will only be successful if they are from the same origin, so basically from your own site somehow (e.g. XSS). Be careful not to mistake disabling CORS (i.e. not setting Access-Control-Allow-Origin: *) for a protection. CORS simply prevents clients from reading the response. The whole request is still sent to and processed by the server.
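As a server-side illustration of the question's content-type rule (not a complete CSRF defence on its own), a hypothetical Web API filter might look like this:

using System;
using System.Net;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

// Hypothetical filter: reject any POST whose Content-Type is not application/json,
// so the payloads an attacker can submit cross-site from an HTML form
// (urlencoded, multipart, text/plain) never reach the action.
public class RequireJsonContentTypeAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        var contentType = actionContext.Request.Content == null
            ? null
            : actionContext.Request.Content.Headers.ContentType;

        if (actionContext.Request.Method == HttpMethod.Post &&
            (contentType == null ||
             !string.Equals(contentType.MediaType, "application/json", StringComparison.OrdinalIgnoreCase)))
        {
            // 400, matching the behaviour the question describes.
            actionContext.Response = new HttpResponseMessage(HttpStatusCode.BadRequest);
        }
    }
}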
Yes, it is possible. You can set up an attacker server that sends the victim's machine a 307 redirect to the target server. You need to use Flash to send the POST instead of using a form.
Reference: https://bugzilla.mozilla.org/show_bug.cgi?id=1436241
It also works on Chrome.
It is possible to do CSRF on JSON-based RESTful services using Ajax. I tested this on an application (using both Chrome and Firefox).
You have to change the contentType to text/plain and the dataType to JSON in order to avoid a preflight request. Then you can send the request, but in order to send session data, you need to set the withCredentials flag in your Ajax request.
I discuss this in more detail here (references are included):
http://wsecblog.blogspot.be/2016/03/csrf-with-json-post-via-ajax.html
I have some doubts concerning point 3. Although it can be considered safe as it does not alter data on the server side, the data can still be read, and the risk is that it can be stolen.
http://haacked.com/archive/2008/11/20/anatomy-of-a-subtle-json-vulnerability.aspx/
Is a web service vulnerable to CSRF attack if the following are true?
Yes. It's still HTTP.
Are PUT and DELETE requests ever vulnerable to CSRF?
Yes
it seems that most (all?) browsers disallow these methods in HTML forms
Do you think that a browser is the only way to make an HTTP request?

Customize the Authorization HTTP header

I need to authenticate a client when he sends a request to an API. The client has an API-token and I was thinking about using the standard Authorization header for sending the token to the server.
Normally this header is used for Basic and Digest authentication. But I don't know if I'm allowed to customize the value of this header and use a custom authentication scheme, e.g.:
Authorization: Token 1af538baa9045a84c0e889f672baf83ff24
Would you recommend this or not? Or is there a better approach for sending the token?
You can create your own custom auth schemes that use the Authorization: header - for example, this is how OAuth works.
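For example, a .NET client could send the token from the question under a custom scheme name; this is just a sketch of the client side:

using System.Net.Http;
using System.Net.Http.Headers;

// Sketch: send the custom "Token" scheme with the example token from the question.
var client = new HttpClient();
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Token", "1af538baa9045a84c0e889f672baf83ff24");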
As a general rule, if servers or proxies don't understand the values of standard headers, they will leave them alone and ignore them. It is creating your own header keys that can often produce unexpected results - many proxies will strip headers with names they don't recognise.
Having said that, it is possibly a better idea to use cookies to transmit the token, rather than the Authorization: header, for the simple reason that cookies were explicitly designed to carry custom values, whereas the specification for HTTP's built in auth methods does not really say either way - if you want to see exactly what it does say, have a look here.
The other point about this is that many HTTP client libraries have built-in support for Digest and Basic auth but may make life more difficult when trying to set a raw value in the header field, whereas they will all provide easy support for cookies and will allow more or less any value within them.
In the case of cross-origin requests, read this:
I faced this situation: at first I chose to use the Authorization header, and later removed it after running into the following issue.
The Authorization header is considered a custom header. So if a cross-domain request is made with the Authorization header set, the browser first sends a preflight request. A preflight request is an HTTP request using the OPTIONS method; it carries none of the parameters of the original request. Your server needs to respond with an Access-Control-Allow-Headers header whose value includes your custom header (the Authorization header).
So for each request the client (browser) sent, an additional HTTP request (OPTIONS) was being sent by the browser. This deteriorated the performance of my API.
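One way to soften that cost is to answer the preflight directly and let the browser cache it via Access-Control-Max-Age. The following is an assumed Web API DelegatingHandler sketch with example header values, not a drop-in implementation:

using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Assumed handler: short-circuits the CORS preflight and lets the browser cache it.
public class PreflightHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (request.Method == HttpMethod.Options)
        {
            var response = new HttpResponseMessage(HttpStatusCode.OK);
            response.Headers.Add("Access-Control-Allow-Origin", "http://localhost:1234"); // example origin
            response.Headers.Add("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE");
            response.Headers.Add("Access-Control-Allow-Headers", "Authorization, Content-Type");
            response.Headers.Add("Access-Control-Max-Age", "600"); // cache the preflight for 10 minutes
            return Task.FromResult(response);
        }
        return base.SendAsync(request, cancellationToken);
    }
}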
You should check whether adding this degrades your performance. As a workaround, I am sending tokens in HTTP parameters, which I know is not the best way of doing it, but I couldn't compromise on performance.
This is a bit dated, but there may be others looking for answers to the same question. You should think about what protection spaces make sense for your APIs. For example, you may want to identify and authenticate client application access to your APIs in order to restrict their use to known, registered client applications. In this case, you can use the Basic authentication scheme with the client identifier as the user-id and the client shared secret as the password. You don't need proprietary authentication schemes; just clearly identify the one(s) to be used by clients for each protection space. I prefer only one per protection space. The HTTP standards allow both multiple authentication schemes on each WWW-Authenticate response header and multiple WWW-Authenticate headers in each response, but that leaves API clients confused about which options to use. Be consistent and clear, and your APIs will be used.
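A sketch of that Basic scheme from a .NET client, with clientId and clientSecret as placeholders for the registered identifier and shared secret:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

// Sketch: client identifier as the user-id, shared secret as the password.
var clientId = "registered-client-id";       // placeholder
var clientSecret = "client-shared-secret";   // placeholder

var credentials = Convert.ToBase64String(Encoding.UTF8.GetBytes(clientId + ":" + clientSecret));
var httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", credentials);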
I would recommend not using HTTP authentication with custom scheme names. If you feel that you have something of generic use, you can define a new scheme, though. See http://greenbytes.de/tech/webdav/draft-ietf-httpbis-p7-auth-latest.html#rfc.section.2.3 for details.
Kindly try the below in Postman; setting this in the header section worked for me:
Authorization : JWT eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyIkX18iOnsic3RyaWN0TW9kZSI6dHJ1ZSwiZ2V0dGVycyI6e30sIndhc1BvcHVsYXRlZCI6ZmFsc2UsImFjdGl2ZVBhdGhzIjp7InBhdGhzIjp7InBhc3N3b3JkIjoiaW5pdCIsImVtYWlsIjoiaW5pdCIsIl9fdiI6ImluaXQiLCJfaWQiOiJpbml0In0sInN0YXRlcyI6eyJpZ25vcmUiOnt9LCJkZWZhdWx0Ijp7fSwiaW5pdCI6eyJfX3YiOnRydWUsInBhc3N3b3JkIjp0cnVlLCJlbWFpbCI6dHJ1ZSwiX2lkIjp0cnVlfSwibW9kaWZ5Ijp7fSwicmVxdWlyZSI6e319LCJzdGF0ZU5hbWVzIjpbInJlcXVpcmUiLCJtb2RpZnkiLCJpbml0IiwiZGVmYXVsdCIsImlnbm9yZSJdfSwiZW1pdHRlciI6eyJkb21haW4iOm51bGwsIl9ldmVudHMiOnt9LCJfZXZlbnRzQ291bnQiOjAsIl9tYXhMaXN0ZW5lcnMiOjB9fSwiaXNOZXciOmZhbHNlLCJfZG9jIjp7Il9fdiI6MCwicGFzc3dvcmQiOiIkMmEkMTAkdTAybWNnWHFjWVQvdE41MlkzZ2l3dVROd3ZMWW9ZTlFXejlUcThyaDIwR09IMlhHY3haZWUiLCJlbWFpbCI6Im1hZGFuLmRhbGUxQGdtYWlsLmNvbSIsIl9pZCI6IjU5MjEzYzYyYWM2ODZlMGMyNzI2MjgzMiJ9LCJfcHJlcyI6eyIkX19vcmlnaW5hbF9zYXZlIjpbbnVsbCxudWxsLG51bGxdLCIkX19vcmlnaW5hbF92YWxpZGF0ZSI6W251bGxdLCIkX19vcmlnaW5hbF9yZW1vdmUiOltudWxsXX0sIl9wb3N0cyI6eyIkX19vcmlnaW5hbF9zYXZlIjpbXSwiJF9fb3JpZ2luYWxfdmFsaWRhdGUiOltdLCIkX19vcmlnaW5hbF9yZW1vdmUiOltdfSwiaWF0IjoxNDk1MzUwNzA5LCJleHAiOjE0OTUzNjA3ODl9.BkyB0LjKB4FIsCtnM5FcpcBLvKed_j7rCCxZddwiYnU
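On the receiving side, a Web API handler could read that header back out; the handler name and ValidateToken are placeholders, not a real JWT validation:

using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Assumed handler: reads the custom "JWT" scheme from the Authorization header.
public class TokenAuthenticationHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var auth = request.Headers.Authorization;   // scheme + parameter, already parsed
        if (auth == null || auth.Scheme != "JWT" || !ValidateToken(auth.Parameter))
        {
            return new HttpResponseMessage(HttpStatusCode.Unauthorized) { RequestMessage = request };
        }
        return await base.SendAsync(request, cancellationToken);
    }

    private static bool ValidateToken(string token)
    {
        // Placeholder: verify signature and expiry with a real JWT library here.
        return !string.IsNullOrEmpty(token);
    }
}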

Resources