Why are SOAP and GET disabled in ASMX web services by default?

I'm about to turn on the missing protocols for my ASMX web services. They're already behind two layers of authentication and have a role-checking attribute, so they are otherwise secure.
This MS KB article explains that GET and SOAP are disabled for ASMX by default while POST is enabled, but it doesn't say why beyond "security reasons." Is this just superstition? Why did they do that? It seems that having POST enabled is just as insecure as having GET enabled.
I suppose this reduces the attack surface, but disabling everything until someone invokes the web service by a particular protocol would be even more secure than leaving POST enabled.

The actual link is "INFO: HTTP GET and HTTP POST Are Disabled by Default".
The GET and POST bindings cannot carry SOAP headers, which many services require for security purposes.
Additionally, these protocols are not used that often for pure SOAP services (the SOAP specification calls for POST). Keeping them enabled leaves a door open that nobody will be watching; bad people may sneak in.
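To make the SOAP-header point concrete, here is a minimal ASMX sketch (the service, header, and token names are hypothetical, not from the question) of the kind of credential that only the SOAP binding can deliver; the GET and POST bindings pass only simple parameters and have no envelope to carry such a header:

```csharp
using System.Web.Services;
using System.Web.Services.Protocols;

// Hypothetical custom SOAP header carrying a credential token.
public class AuthHeader : SoapHeader
{
    public string Token;
}

public class OrdersService : WebService
{
    // Populated by ASP.NET from the incoming SOAP envelope.
    public AuthHeader Auth;

    [WebMethod]
    [SoapHeader("Auth")]   // only the SOAP binding can deliver this header
    public string GetOrders()
    {
        if (Auth == null || string.IsNullOrEmpty(Auth.Token) || !IsValidToken(Auth.Token))
            throw new SoapException("Not authorized", SoapException.ClientFaultCode);

        return "order data...";
    }

    // Placeholder check; a real service would validate against a ticket store or signature.
    private static bool IsValidToken(string token)
    {
        return token == "expected-token"; // illustration only
    }
}
```

The bindings themselves are switched on and off under the <webServices><protocols> element in web.config, which is where you would re-enable HttpGet or HttpPost if you decide the trade-off is acceptable.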

Related

Which web server should HTTP security headers be set on when a web application is a consumer of a web service?

There is a web application and a web service, hosted on separate web servers. The web application is a consumer of the web service.
The question is: on which web server should HTTP security headers (e.g. Strict-Transport-Security, X-XSS-Protection, X-Frame-Options, X-Content-Type-Options, etc.) be set?
On the web service's server, on the consuming web application's server, or on both? Which is reasonable?
If you understand what each of them does, you will be able to tell which is needed where.
Strict-Transport-Security is about ensuring that clients only use HTTPS (and not plain HTTP) to access content. This of course needs a compliant client. All browsers are, and hopefully some other clients also honor this header. Even if not, you should just send it from both services and web apps.
X-XSS-Protection is about explicitly enabling some cross-site scripting protection in browsers (in practice this means not running JavaScript on a page when the request contained the same JavaScript, to prevent some reflected XSS). It is still best practice to send this for web apps, though it is now the default in browsers, and it does not really prevent more advanced XSS at all (the app itself needs to be correctly implemented). For backend services that only serve JSON and not HTML this is irrelevant, and it is also irrelevant if the client is not a browser (but, for example, a web app). You can still send it from services too; it won't do any harm.
X-Frame-Options is to mitigate clickjacking, among some other more niche attacks. It basically prevents the browser from opening the page in a frame. If the client is not a browser, this doesn't make a lot of sense; however, it might have implications for data leaks if used together with CORS headers. So again, you can just send this from services too, but it's not strictly necessary in the base case.
X-Content-Type-Options: nosniff is to mitigate an attack that used to be performed against older versions of Internet Explorer (and maybe some other browsers as well), where the browser incorrectly sniffed file types instead of honoring the content type the server sent, especially during file downloads. I think this is no longer feasible with modern browsers, but the best practice is still to send it. It has no effect for non-browser clients, but does no harm either.
So in short, most of these are only relevant for web applications and not for backend services that do not serve HTML. However, you can and probably should still send them from services too; they will just do nothing in most cases, and might help if, for example, an attacker somehow makes a user open a service response in a browser.
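If you do decide to send them from both the service and the application, a minimal ASP.NET sketch in Global.asax (the values shown are common defaults, not requirements of any standard) might look like this:

```csharp
// Global.asax.cs -- minimal sketch; header values are common defaults, adjust as needed.
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        HttpResponse response = Context.Response;

        // HSTS is only honored when sent over HTTPS, so skip it on plain-HTTP requests.
        if (Context.Request.IsSecureConnection)
            response.AppendHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");

        response.AppendHeader("X-Content-Type-Options", "nosniff");
        response.AppendHeader("X-Frame-Options", "DENY");
        response.AppendHeader("X-XSS-Protection", "1; mode=block");
    }
}
```

Setting them once at the application level like this keeps the behavior consistent across pages and service endpoints, whichever server they live on.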

HTTP POST header data can be seen in the web browser

In Firefox, using the Firebug add-on, the data you entered can be seen in the POST headers. I wonder if this is a security flaw in the ASP.NET web application.
It's not. If a browser doesn't expose the headers, other tools will. They're public by nature, except to the degree that HTTPS encryption keeps third parties from reading them.

Are web services exposed to anyone?

I'm very new to web services (please note, not WCF but the old-fashioned .asmx files).
Now, I may be likening this too much to ports, but if I expose a port on my web-facing server then it is exposed to attacks as well as to my own use; there are tools which can scan to see what ports are open.
Is this true of a web service? Now, don't get me wrong, I know each service should be coded well enough that nothing malicious can happen, or that the calling class doesn't know the 'contract' to implement them, but that's not the question (and I guess port flooding could still occur?). If I put up a few web services on a server, is there a tool/program which can detect them (by name)?
Yes, a web service is basically a web page that takes arguments and responds with a formatted result that can be read more easily by a program (technically both are the result of an HTTP request and response; there are other mechanisms as well, but the typical one is over the HTTP protocol).
If you type the link to your web service in a browser you will see you are presented with an interface that allows you to "execute" its services.
Therefore you need the same security as with a web page, meaning login or credential checks, tokens, signing, encryption and so forth (preferably over an SSL connection).
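As a rough illustration, here is a minimal ASMX sketch (the service name, method, and "ApiUser" role are made up) of checking the caller's credentials inside a web method, on the assumption that the site's normal authentication has already populated the User principal:

```csharp
using System.Web.Services;
using System.Web.Services.Protocols;

public class AccountService : WebService
{
    [WebMethod]
    public string GetAccountSummary()
    {
        // User is the principal established by the site's normal authentication
        // (forms auth, Windows auth, etc.); the role name here is illustrative.
        if (User == null || !User.Identity.IsAuthenticated || !User.IsInRole("ApiUser"))
            throw new SoapException("Access denied", SoapException.ClientFaultCode);

        return "account summary...";
    }
}
```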

Should I verify HTTP Referer in OAuth 2 Callback?

I am successfully able to authenticate Facebook and Google accounts using my OAuth 2 servlets. I am using state with a timer and a session cookie to try to verify that it is indeed a legitimate OAuth callback.
Is there any benefit if I also examine the HTTP Referer header to ensure that I was redirected from the provider's OAuth page?
If no benefit, could there be a problem if I also examine the HTTP Referer field?
No.
I can simulate any headers I want as a malicious attacker. I can make it look like I'm coming from http://cia.fbi.gov.vpn/uber1337h4x. This is obvious and well known.
Any pages coming from HTTPS do not send a Referer header, as per RFC 2616 sec. 15:
Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP request if the referring page was transferred with a secure protocol.
It also breaks usability, as per RFC 2616 sec. 15:
Because the source of a link might be private information or might reveal an otherwise private information source, it is strongly recommended that the user be able to select whether or not the Referer field is sent.
In short, you do not gain greater security. Your security is not in inspecting a vastly insecure transport protocol; it's in the OAuth layer. You also break usability.
Don't do it.
The answer is:
No, you shouldn't use it, and there is NO real benefit to doing so.
Authorization servers are well aware of this too, and it was stated here.
From the mailing list of OAuth-WG:
Callback URL pages SHOULD redirect to a trusted page immediately after receiving the authorization code in the URL. This prevents the authorization code from remaining in the browser history, or from inadvertently leaking in a referer header.
If you are worried about CSRF, you SHOULD NOT use the HTTP Referer header as a technique to verify the origin of an authorization; that is what the state parameter is for (which it sounds like you're using).
If you are worried about a specific security concern of the OAuth 2 protocol, there is a full section inside the draft.
If you are worried about other security considerations, this is the source.
I suggest you put all your effort into implementing all the validations around the state parameter.
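As a rough sketch of that advice, written here in C#/ASP.NET to match the rest of this page rather than the asker's servlets, this is one way (all names, such as the oauth_state cookie, are illustrative) to issue a random state value, mirror it in a short-lived cookie, and compare the two on the callback:

```csharp
using System;
using System.Security.Cryptography;
using System.Web;

// Minimal sketch of the state-plus-cookie pattern; all names are illustrative.
public static class OAuthState
{
    // Before redirecting to the provider: create a random state value and
    // mirror it in a short-lived, HttpOnly cookie.
    public static string Issue(HttpResponse response)
    {
        var bytes = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(bytes);
        string state = Convert.ToBase64String(bytes);

        response.Cookies.Add(new HttpCookie("oauth_state", state)
        {
            HttpOnly = true,
            Secure = true,                       // send only over HTTPS
            Expires = DateTime.UtcNow.AddMinutes(10)
        });
        return state;                            // append as &state=... to the authorize URL
    }

    // On the callback: the state echoed by the provider must match the cookie
    // set for this browser; otherwise reject the response.
    public static bool Validate(HttpRequest request)
    {
        HttpCookie cookie = request.Cookies["oauth_state"];
        string returned = request.QueryString["state"];
        return cookie != null
            && !string.IsNullOrEmpty(returned)
            && cookie.Value == returned;
    }
}
```

The comparison only proves that the browser presenting the authorization code is the one that started the flow, which is exactly the CSRF property the state parameter is meant to provide.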
Edit:
After reading the nuances of the question, you have really answered your own question. The use of cookies (or perhaps HTML5 local storage) for both cases is the best solution we know of so far.
The first nuance is about CSRF, and one of the possible countermeasures available is checking the HTTP Referer header, which is already addressed in the protocol.
The second nuance I'm not completely sure about, but it is probably a case of an extension grant, because it sounds like you may be working as an "auth proxy requester", the same as the SAML OAuth 2 extension.
Don't verify the HTTP Referer; the "state" parameter (which it sounds like you're using) is the approach OAuth 2.0 defines to defend against cross-site request forgery (CSRF) attacks.
You may want to have a look at the new O'Reilly book Getting Started with OAuth 2.0 by Ryan Boyd. It describes this and related security considerations.
Plain security was not a concern of the question because the state parameter is being used.
The main concerns I had in mind were:
Whether it is the same browser that my app sent to Facebook that is coming back to present a candidate token.
Whether the agent (a browser-like agent) or agents are repeatedly doing OAuth requests and presenting me with bad OAuth tokens, causing my app to repeatedly contact Facebook with bad tokens and leading to potentially adverse treatment by Facebook.
The only possible solution to the first problem is to also set a cookie in addition to using state. Referer would help if most providers weren't using HTTPS.
The second problem has a nuance: the misbehaving agents need not be directly controlled by a malicious entity. They may be normal users' browsers redirected via some indirect means (a popular hijacked website, social engineering).
Because of that nuance there is a chance that the Referer header may not be forged. However, HTTPS precludes any meaningful benefit.
Cookies definitely help in the second case too, because if you set the cookies in a POST, no third-party website can cause them to be set, and you cannot be flooded with bad OAuth responses by hacked websites redirecting users en masse to start OAuth flows against you.
This is not a clear answer (or question) but hopefully this shows the nuances behind the question.

ASP.NET: use requireSSL and still be able to check if the user is authenticated on non-SSL pages

I have a web application (ASP.NET 3.5) with mixed SSL. All account-related pages are delivered over SSL; most other pages are delivered over non-SSL. To automatically switch between HTTPS and HTTP I use this component. Lately there was a news item about the ability to hijack user sessions on non-secure Wi-Fi networks. This is possible by capturing the cookie that is transmitted over non-SSL connections.
This triggered me to review my security choices in this web application. I've (again) grabbed this article from MSDN and tried the requireSSL="true" property on my forms authentication. Before I had even started the web application, I realized that my User.Identity will be null on non-SSL pages because the cookie containing this information isn't sent to or from the web browser.
I need a mechanism that authenticates the user over an SSL connection... and remembers this authentication info, even on non-SSL pages.
While searching SO I found this post, and it seems to me that this is a good solution. But I wonder if a solution could be found in storing login information in session state? I'm thinking of handling Application_AuthenticateRequest in Global.asax, checking whether the connection is secure, and then checking either the auth cookie or the session. I don't know exactly how I'm going to implement this yet. Maybe you can think this through with me?
Unfortunately, you have conflicting requirements. You can't have a secure session over non-SSL, so I'd like to challenge your underlying assumption: why not have the whole site use SSL?
From the MVC FAQ (a similar question, answered by security guru Levi, asking for an attribute to not use SSL):
The [RequireHttps] attribute can be used on a controller type or action method to say "this can be accessed only via SSL." Non-SSL requests to the controller or action will be redirected to the SSL version (if an HTTP GET) or rejected (if an HTTP POST). You can override the RequireHttpsAttribute and change this behavior if you wish. There's no [RequireHttp] attribute built-in that does the opposite, but you could easily make your own if you desired.
There are also overloads of Html.ActionLink() which take a protocol parameter; you can explicitly specify "http" or "https" as the protocol. Here's the MSDN documentation on one such overload. If you don't specify a protocol or if you call an overload which doesn't have a protocol parameter, it's assumed you wanted the link to have the same protocol as the current request.
The reason we don’t have a [RequireHttp] attribute in MVC is that there’s not really much benefit to it. It’s not as interesting as [RequireHttps], and it encourages users to do the wrong thing. For example, many web sites log in via SSL and redirect back to HTTP after you’re logged in, which is absolutely the wrong thing to do. Your login cookie is just as secret as your username + password, and now you’re sending it in cleartext across the wire. Besides, you’ve already taken the time to perform the handshake and secure the channel (which is the bulk of what makes HTTPS slower than HTTP) before the MVC pipeline is run, so [RequireHttp] won’t make the current request or future requests much faster.
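For reference, a minimal MVC sketch of the [RequireHttps] usage described in that excerpt (the controller is hypothetical):

```csharp
using System.Web.Mvc;

// Hypothetical controller: every action is reachable only over SSL.
// Plain-HTTP GETs are redirected to HTTPS; plain-HTTP POSTs are rejected.
[RequireHttps]
public class AccountController : Controller
{
    public ActionResult Login()
    {
        return View();
    }
}
```

As the excerpt notes, there is deliberately no [RequireHttp] counterpart, because dropping back to HTTP after login would send the authentication cookie in cleartext.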
