I’ve had a bit of feedback from some threat and vulnerability folks about websites returning HTTP 500 response codes. Essentially the advice is that all possible measures should be taken to avoid the server throwing a 500 in the first place (e.g. extensive form input validation), which is fine.
However, the advice also suggested that attempts to compromise security - such as inserting a tag into a random query string so that ASP.NET request validation fires, or manipulating viewstate - should not return an HTTP 500 either. Obviously the native framework behaviour is to interpret the request and possibly redirect to a custom error page, but even this returns a 500 response code.
So I’m after some thoughts on how to approach this. Is there any way to configure the app at either the .NET level or IIS level to return an HTTP 200 when a 500 is raised? Or does this become a coding exercise at global.asax level in one of the application events? Are there other consequences to consider?
BTW, the rationale from the security side is that apps which return HTTP 500 may be viewed as “low hanging fruit” by bots randomly scanning for vulnerabilities, prompting further malicious activity. I’m personally not convinced that changing response codes offers any real security gain, but am happy to take the advice of the pros.
To answer your question: global.asax is the correct place - in particular, the Application_Error event handler. You should be able to do something like
Response.StatusCode = 200
Response.StatusDescription = "OK"
there.
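For example, a minimal sketch in Global.asax.cs (the Server.ClearError() call, the Response.Clear() and the generic body are my additions, assuming you really do want to mask every error):

protected void Application_Error(object sender, EventArgs e)
{
    // Swallow the exception so ASP.NET doesn't produce its own 500 response.
    Server.ClearError();

    // Replace anything already written with a bland 200 response.
    Response.Clear();
    Response.StatusCode = 200;
    Response.StatusDescription = "OK";
    Response.Write("An error occurred.");
}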
PS: Don't do it. :-) To me, this sounds like yet another security-by-obscurity-by-violating-standards approach. I really don't think the (possibly marginal) security increase is worth breaking correct HTTP behavior (think about Google indexing your error pages, etc.).
Why? Sending back a 500 code without any other information gives an attacker very little to go on.
Provided you don't send stack traces, state dumps, etc. back to the client, you're fine. And I doubt very much you want to do that.
Seriously, just create a "500" page (picture of a whale optional) and return that with your status 500 - it gives nothing away.
Sending a 200 status when the page was NOT generated successfully is plain wrong and may cause bad things to happen - such as bots thinking it is a genuine page. You don't want your "500 error" page showing up on Google because you sent back a 200 status instead.
You can set custom error pages in config - and you can re-set the response status in your custom error page:
<customErrors mode="On" defaultRedirect="~/DefaultErrorPage.htm" >
<error statusCode="500" redirect="~/CustomError.aspx"/>
</customErrors>
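For instance, the code-behind of CustomError.aspx could be as small as this sketch - it restores the 500 status that the customErrors redirect would otherwise replace with a 200:

protected void Page_Load(object sender, EventArgs e)
{
    // The redirect to this page returned 302/200; put the real status back.
    Response.StatusCode = 500;
    Response.StatusDescription = "Internal Server Error";
}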
Serious warning...!
Make sure there aren't any errors EVER in your custom error page. Usually you would use an HTML page, not an aspx page - but if you want to change the response headers you may need the aspx page. Don't do anything else on there except change the headers (i.e. don't display any data from a database or try to run any logic); an error on your custom error page will cause a "maximum number of redirects" error.
Related
I am at my wits' end with the following problem:
My site www.sebastianthalhammer.com is available under that URL without any problems.
However, Google Search Console, as well as other external third-party test tools, reports a 404 error.
Status report from Uptrends
It is just the main page that's affected. None of the other subpages or blog content are affected.
I have been in contact with the server staff, but everything seems alright to them. As mentioned, the site can be reached. The site runs on WordPress - latest version.
I have no real clue where to start, as this error seems to be quite a tricky one. Does anyone here have an idea what's going on?
Sebastian
The 4xx class of status code is intended for cases in which the client seems to have erred. Except when responding to a HEAD request, the server SHOULD include a representation containing an explanation of the error situation, and whether it is a temporary or permanent condition. These status codes are applicable to any request method. User agents SHOULD display any included representation to the user.
This leaves me with two possible explanations:
Explanation 1: it's a server error.
the server wrongly returns a 404 status code
the browser thinks the response body contains details about the error and displays it - for the end user this is the actual page
Explanation 2: it's done on purpose to defeat crawlers and page watchers.
the server returns 404 on purpose - non-browser user agents won't process the result, as they interpret it as an error
browsers are unaffected, the end user doesn't care as long as the page is being displayed
The second one would indeed be kind of clever if you don't want your page to be indexed.
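Either way, it's easy to confirm what a non-browser client sees. A minimal C# sketch using HttpClient (the URL is the one from the question):

using System;
using System.Net.Http;

class StatusCheck
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // GetAsync does not throw on 4xx, so the code can be inspected directly.
            var response = client.GetAsync("https://www.sebastianthalhammer.com").Result;
            Console.WriteLine((int)response.StatusCode); // 404, although a browser renders a full page
        }
    }
}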
Thanks to your feedback I was able to think about the problem in a different way.
Ultimately, in the unholy depths of a certain plugin, I dug out a setting that caused the error.
It was a redirection plugin that (for whatever reason) sent out a 404 status code when the URL was requested.
I don't know what the purpose of that would be. All I know is that the setting had been on its default for quite a while, and that caused the weird situation.
Thanks, guys, for getting me on the right track.
Sebastian
(This is sort of an abstract philosophical question. But I believe it has objective concrete answers.)
I'm writing an API, and my API has a "status" page (like https://status.github.com/).
If whatever logic I have in place to determine the status says everything is good, my plan would be to return 200 OK and a JSON response with more information about each service tested by my status page.
But what if my logic says the API is down? Say the database isn't responding or something.
I think I want to return 500 INTERNAL SERVER ERROR (or 503 SERVICE UNAVAILABLE) along with a JSON response with more details.
However, is that breaking the HTTP status code spec? Would that confuse end users? My status page itself is working just fine in that case, so maybe it should return 200? But that would mean anyone using it would have to dig into the body looking for a specific parameter to determine the API's status, rather than just checking the HTTP status code. (Also, if my status page itself were broken, I'm fine with the end user taking that to mean the API is down, since that's a pretty bad sign...)
Thoughts? Is there official protocol on how a status page should work?
https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
For me, the page should return 200 unless the status page itself has problems. It's true that it's easier to check the status code of a response than to parse the body, but using HTTP status codes to encode application information breaks what people (and spiders) expect. If a spider passes over your page and sees a 500 or 503, it will think your site has a page with problems, not that the page is fine and is signaling that the site is down.
Also, as you noticed, it won't be possible to distinguish between "the service is down" and "the status page is down", and only the latter should send a 500. And what if you report on more than one service, like the Twitter status page does? Use 200.
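As a sketch of what that looks like - ASP.NET Web API here, since the question doesn't name a stack, and CheckDatabase() is a hypothetical health check:

using System.Web.Http;

public class StatusController : ApiController
{
    public IHttpActionResult Get()
    {
        bool databaseUp = CheckDatabase();

        // Always 200: the status page itself worked. The outage, if any, is in the body.
        return Ok(new
        {
            status = databaseUp ? "ok" : "degraded",
            services = new { database = databaseUp ? "up" : "down" }
        });
    }

    private bool CheckDatabase()
    {
        // Placeholder: ping the database (or any other dependency) here.
        return true;
    }
}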
Related: https://stackoverflow.com/a/943021/1536382 https://stackoverflow.com/a/34324179/1536382
I have a comment form on my website and I would like to stop any HTML from being posted through it. I was under the impression that ASP.NET automatically stops any HTML from being submitted by throwing a "potentially dangerous request" exception, but it's allowing HTML in this case.
All of the settings that relate to validation have been left to default so it should be set to requestValidationMode="4.0".
Anyone know what can cause this? Does it have anything to do with the fact that I am using AJAX callbacks?
Edit: I have gathered some more details:
Validation is correctly working in one sub-folder in my application, but it isn't working in any of the others. I looked into my web.config and this is the only setting I have put regarding page validation:
<pages enableViewStateMac="true" validateRequest="true">
Why is it working in one subfolder but not in the others? Does it have anything to do with the fact that this subfolder has a web.config entry regarding authentication?
Edit: Regular postbacks are being validated, just not callbacks.
Edit again: I was playing around with Fiddler and while doing so I noticed one of the callbacks was blocked by the server. Here is what the blocked request looks like:
And here is the plain text version:
__EVENTTARGET=&__EVENTARGUMENT=&__VIEWSTATE=%2FwEPDwUKLTI4MDgzMDI5NA9kFgJmD2QWBAIBDxYCHgdwcm9maWxlBSFodHRwOi8vd3d3LnczLm9yZy8yMDA1LzEwL3Byb2ZpbGVkAgMPZBYOZg8WAh4Fc3R5bGUFkAFiYWNrZ3JvdW5kLXBvc2l0aW9uOnJpZ2h0IGJvdHRvbTtiYWNrZ3JvdW5kLXJlcGVhdDpOby1SZXBlYXQ7YmFja2dyb3VuZC1pbWFnZTp1cmwoaHR0cDovL2xvY2FsaG9zdDo4MS9JbWFnZXMvTGF5b3V0LUltYWdlcy9UaXRsZUJhY2tncm91bmQucG5nKTtkAgIPFgIeCWlubmVyaHRtbAWlAjxzY3JpcHQgdHlwZT0idGV4dC9qYXZhc2NyaXB0Ij48IS0tCmdvb2dsZV9hZF9jbGllbnQgPSAiY2EtcHViLTk2MTM2OTA0OTA1Mjg4MTQiOwovKiBCT0lHIFRvcCAqLwpnb29nbGVfYWRfc2xvdCA9ICI0MjcwOTc3NjY2IjsKZ29vZ2xlX2FkX3dpZHRoID0gNzI4Owpnb29nbGVfYWRfaGVpZ2h0ID0gMTU7Ci8vLS0%2BCjwvc2NyaXB0Pgo8c2NyaXB0IHR5cGU9InRleHQvamF2YXNjcmlwdCIKc3JjPSJodHRwOi8vcGFnZWFkMi5nb29nbGVzeW5kaWNhdGlvbi5jb20vcGFnZWFkL3Nob3dfYWRzLmpzIj4KPC9zY3JpcHQ%2BZAIDDxYCHwIFqgI8c2NyaXB0IHR5cGU9InRleHQvamF2YXNjcmlwdCI%2BPCEtLQpnb29nbGVfYWRfY2xpZW50ID0gImNhLXB1Yi05NjEzNjkwNDkwNTI4ODE0IjsKLyogQk9JRyBTaWRlYmFyICovCmdvb2dsZV9hZF9zbG90ID0gIjIyMDAxMDYxMTAiOwpnb29nbGVfYWRfd2lkdGggPSAxNjA7Cmdvb2dsZV9hZF9oZWlnaHQgPSA2MDA7Ci8vLS0%2BCjwvc2NyaXB0Pgo8c2NyaXB0IHR5cGU9InRleHQvamF2YXNjcmlwdCIKc3JjPSJodHRwOi8vcGFnZWFkMi5nb29nbGVzeW5kaWNhdGlvbi5jb20vcGFnZWFkL3Nob3dfYWRzLmpzIj4KPC9zY3JpcHQ%2BZAIEDw8WAh4HVmlzaWJsZWdkZAIFDxYCHwIFEjxwPkxpbmsgMSB0ZXh0PC9wPmQCBg9kFgICAw8WAh8DaGQCBw8WAh4EVGV4dAU%2BPHNwYW4gc3R5bGU9Im1hcmdpbi1yaWdodDogNXB4OyI%2BwqkgQmluZGluZ09mSXNhYWNHdWlkZTwvc3Bhbj5kZKMbP1fMlxWhgVw8zpEPBPGzlw5j&=sdfdsgd%40sfds.com&=%3Cp%3Efghfghfg%3C%2Fp%3E&=%3Cp%3Efghfghfg%3C%2Fp%3E&=%3Cp%3Efghfghfg%3C%2Fp%3E&__CALLBACKID=__Page&__CALLBACKPARAM=sdfdsgd%40sfds.com--%7C%7C--%3Cp%3Efghfghfg%3C%2Fp%3E--%7C%7C--%3Cp%3Efghfghfg%3C%2Fp%3E--%7C%7C--%3Cp%3Efghfghfg%3C%2Fp%3E
Here is a typical request that isn't blocked:
Plain text:
__EVENTTARGET=&__EVENTARGUMENT=&__VIEWSTATE=%2FwEPDwUKLTI4MDgzMDI5NA9kFgJmD2QWBAIBDxYCHgdwcm9maWxlBSFodHRwOi8vd3d3LnczLm9yZy8yMDA1LzEwL3Byb2ZpbGVkAgMPZBYOZg8WAh4Fc3R5bGUFkAFiYWNrZ3JvdW5kLXBvc2l0aW9uOnJpZ2h0IGJvdHRvbTtiYWNrZ3JvdW5kLXJlcGVhdDpOby1SZXBlYXQ7YmFja2dyb3VuZC1pbWFnZTp1cmwoaHR0cDovL2xvY2FsaG9zdDo4MS9JbWFnZXMvTGF5b3V0LUltYWdlcy9UaXRsZUJhY2tncm91bmQucG5nKTtkAgIPFgIeCWlubmVyaHRtbAWlAjxzY3JpcHQgdHlwZT0idGV4dC9qYXZhc2NyaXB0Ij48IS0tCmdvb2dsZV9hZF9jbGllbnQgPSAiY2EtcHViLTk2MTM2OTA0OTA1Mjg4MTQiOwovKiBCT0lHIFRvcCAqLwpnb29nbGVfYWRfc2xvdCA9ICI0MjcwOTc3NjY2IjsKZ29vZ2xlX2FkX3dpZHRoID0gNzI4Owpnb29nbGVfYWRfaGVpZ2h0ID0gMTU7Ci8vLS0%2BCjwvc2NyaXB0Pgo8c2NyaXB0IHR5cGU9InRleHQvamF2YXNjcmlwdCIKc3JjPSJodHRwOi8vcGFnZWFkMi5nb29nbGVzeW5kaWNhdGlvbi5jb20vcGFnZWFkL3Nob3dfYWRzLmpzIj4KPC9zY3JpcHQ%2BZAIDDxYCHwIFqgI8c2NyaXB0IHR5cGU9InRleHQvamF2YXNjcmlwdCI%2BPCEtLQpnb29nbGVfYWRfY2xpZW50ID0gImNhLXB1Yi05NjEzNjkwNDkwNTI4ODE0IjsKLyogQk9JRyBTaWRlYmFyICovCmdvb2dsZV9hZF9zbG90ID0gIjIyMDAxMDYxMTAiOwpnb29nbGVfYWRfd2lkdGggPSAxNjA7Cmdvb2dsZV9hZF9oZWlnaHQgPSA2MDA7Ci8vLS0%2BCjwvc2NyaXB0Pgo8c2NyaXB0IHR5cGU9InRleHQvamF2YXNjcmlwdCIKc3JjPSJodHRwOi8vcGFnZWFkMi5nb29nbGVzeW5kaWNhdGlvbi5jb20vcGFnZWFkL3Nob3dfYWRzLmpzIj4KPC9zY3JpcHQ%2BZAIEDw8WAh4HVmlzaWJsZWdkZAIFDxYCHwIFEjxwPkxpbmsgNSB0ZXh0PC9wPmQCBg9kFgICAw8WAh8DaGQCBw8WAh4EVGV4dAU%2BPHNwYW4gc3R5bGU9Im1hcmdpbi1yaWdodDogNXB4OyI%2BwqkgQmluZGluZ09mSXNhYWNHdWlkZTwvc3Bhbj5kZCu6t45MzsFLBRWYDAvPXYbIXKqE&=&=&=&=&__CALLBACKID=__Page&__CALLBACKPARAM=cfdsdg%40adfds.com--%7C%7C--%3Cp%3Esdfdgdfg%3C%2Fp%3E--%7C%7C--%3Cp%3Esdfdgdfg%3C%2Fp%3E--%7C%7C--%3Cp%3Esdfdgdfg%3C%2Fp%3E
I don't know why the requests are different; nothing changed on the page. I noticed that the first one seems to have split the values out into individual form parameters while the second one hasn't. Is this the issue?
I just checked the callbacks being sent from the sub-folder and they are all split into parameters just like the first request. I guess this is the problem... but why is it happening?
I made the inputs runat=server and the request changed a bit, but the values are still not being assigned.
Here is what I found:
__CALLBACKPARAM is not actually validated by the server; only the parameters, as seen in the first image, are. With this in mind, I validated the request myself using the following regex: <\/*(p|div|span|a|script|br|b|i|u|h1|h2|h3|ol|ul|li)\s*.*>. You can find more online; hope this helps.
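As an illustration, here's a rough sketch of that manual check (the class and method names are mine, and the tag list mirrors the regex above):

using System.Text.RegularExpressions;
using System.Web;

public static class CallbackValidator
{
    private static readonly Regex HtmlTag = new Regex(
        @"<\/*(p|div|span|a|script|br|b|i|u|h1|h2|h3|ol|ul|li)\s*.*>",
        RegexOptions.IgnoreCase);

    // Call this early in the life cycle of any page that accepts callbacks.
    public static void Validate(HttpRequest request)
    {
        string param = request.Form["__CALLBACKPARAM"];
        if (param != null && HtmlTag.IsMatch(param))
        {
            throw new HttpRequestValidationException(
                "A potentially dangerous value was detected in __CALLBACKPARAM.");
        }
    }
}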
So, from here...
In ASP.NET, you have a choice about how to respond to that - it's in the web.config as customErrors. Turn that on, then redirect to a fancy 404 page (maybe you already do). The fancy 404 page, then, could check the requested URL (which gets passed to the custom error page as a querystring parameter) to see if it's a valid redirect, lives in your database, etc. Just do a Response.Redirect() from there.
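A sketch of that 404 page's code-behind - LookupVanityUrl is a hypothetical helper standing in for your database check, and aspxerrorpath is the querystring parameter ASP.NET appends on the customErrors redirect:

protected void Page_Load(object sender, EventArgs e)
{
    // The originally requested path arrives as ?aspxerrorpath=/whatever
    string requested = Request.QueryString["aspxerrorpath"];

    string target = LookupVanityUrl(requested); // hypothetical database lookup
    if (target != null)
    {
        Response.Redirect(target);
    }
    // Otherwise fall through and render the friendly 404 content.
}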
Then schooner writes:
Thanks, we do have a 404 now, but we would prefer this not to be detected as a 404 in the process. We would like to handle it directly and separately if possible.
..and I'd like to know just how bad a practice this is. I don't expect to put my "pretty" URLs on the internet (just business cards) and I have a sample of 404-redirecting-to-a-helpful-site code working, but I don't want to get to production and have an issue with a browser that takes the initial 404 too seriously. Can anyone help me understand more about why I wouldn't want to use customErrors / 404 to flow users to the page they actually wanted?
The main problem with using customErrors as your 404 handler is that every time customErrors picks up an errored request, rather than sending a 404 back to your browser to let it know the request was bad, it returns a 302, which indicates that the page has relocated to whatever your customErrors page is. This isn't bad for most users, because they don't know or even notice the difference; the problem comes from the fact that web crawlers DO know the difference, and the status code they receive directly affects how their indexing works.
Consider the scenario where you have a page at http://mysite.com/MyAwesomePageAboutStuff.aspx for some period of time and then one day decide you no longer need it and delete the file. If Google or some other crawler has already indexed that URL and goes back to it after you delete it, the crawler will get a 302 status code instead of a 404 error, and because of that status code it will update the page's URL to point to your error page rather than deleting the dead link. Now, whenever someone finds that URL by way of a search engine, they'll end up at your error page.
It's not really a huge issue, but you can definitely see the headaches this can create for your users in the long run.
Look here for some corroborating data.
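If you want the friendly page without the 302, ASP.NET 3.5 SP1 and later support redirectMode="ResponseRewrite", which serves the error page's content under the originally requested URL - a sketch:

<customErrors mode="On" redirectMode="ResponseRewrite" defaultRedirect="~/NotFound.aspx">
  <error statusCode="404" redirect="~/NotFound.aspx"/>
</customErrors>

The error page itself should then set Response.StatusCode = 404 so that crawlers still receive the correct code.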
I created a vanity URL system using the 404 as the handler. There's no need for a 302 on my side, as the 404 handler dynamically loads the content and returns it. I am fully able to handle any and all POST / GET and SERVER data.
Works great. If you are interested, TarantulaHawk is up on SourceForge.
Here's an interesting one for you.
I've got my custom 500.aspx set up, which is called when a 500 error occurs in my application. The 500.aspx also sends me an email with the error details.
I've noticed one small problem.
If you attempt an XSS attack on the 500.aspx itself, the 500 page is not called.
This is obviously some sort of logic issue.
In fact, Microsoft themselves suffer from the same issue.
See it in action here
http://www.microsoft.com/500.aspx?aspxerrorpath=%3Cscript%3Ealert(%22XSS%22)%3C/script%3E
How can I prevent this?
Ed
If you attempt an XSS attack on any page, the custom error page will not be called (here's another random page on Microsoft.com with XSS in the querystring).
The behavior appears to be intentional to stop the attack dead in its tracks. Even the error message indicates this behavior:
Request Validation has detected a potentially dangerous client input value, and processing of the request has been aborted.
The only workaround appears to be to disable validation, or to capture and handle the error in Global.asax in the Application_Error event.
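A sketch of the latter in Global.asax.cs (Server.Transfer keeps the original URL; Response.Redirect also works but reintroduces a 302):

protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    if (ex is HttpRequestValidationException)
    {
        Server.ClearError();            // stop the default handling that bypasses the custom page
        Server.Transfer("~/500.aspx");  // render the custom 500 page ourselves
    }
}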
It appears that once you define a page to handle specific (or non-specific?) errors, it is no longer available directly via its URL, sort of like how web.config can't be requested via the browser.
I would set up a 500Test.aspx which throws an exception, causing a 500 error (and thus firing the 500.aspx).
That might work.
You might want to think about handling your errors in the Application_Error event in Global.asax.cs instead of the 500.aspx page. You could put the email code there, then redirect the user to an error page after you've done your error handling (this is how we do it where I work).
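A rough sketch of that arrangement, with placeholder addresses and SMTP host:

protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();

    // Mail the details before clearing the error.
    using (var mail = new System.Net.Mail.MailMessage("noreply@example.com", "admin@example.com"))
    using (var smtp = new System.Net.Mail.SmtpClient("localhost"))
    {
        mail.Subject = "Unhandled exception";
        mail.Body = ex != null ? ex.ToString() : "(no details available)";
        smtp.Send(mail);
    }

    Server.ClearError();
    Response.Redirect("~/ErrorPage.aspx"); // hypothetical friendly error page
}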