In the past I have dealt with security findings related to default service banners, verbose headers, and information leakage via HTTP response headers. These issues are quite common, and for an ASP.NET/IIS server they usually look something like this:
Server: Microsoft-IIS/10.0
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
These issues are usually quite trivial to deal with: typically a web.config update or a URL Rewrite rule to remove the verbose headers.
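For reference, one code-level variant of that fix strips the headers just before they are sent, from Global.asax instead of web.config. This is only a minimal sketch, assuming the IIS 7+ integrated pipeline; the header names mirror the examples above:

using System;
using System.Web;

public class Global : HttpApplication
{
    // Fires just before response headers go out; only applies to requests
    // that actually pass through the ASP.NET pipeline.
    protected void Application_PreSendRequestHeaders(object sender, EventArgs e)
    {
        HttpResponse response = HttpContext.Current.Response;
        response.Headers.Remove("Server");
        response.Headers.Remove("X-AspNet-Version");
        response.Headers.Remove("X-Powered-By");
    }
}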
However, one issue I stumbled upon recently is that when the server encounters an error, these headers are not removed. For example, a 404 (Not Found) response will still have these headers appended. In fact, most error responses do not get the information-leakage headers stripped properly. I did some searching and found that this issue is not well documented; it has also never come up specifically in one of our pen tests.
I am curious whether other developers have dealt with this issue, specifically information leakage in HTTP response headers when the response code is an error. If so, how did you fix it? I am using the Microsoft stack (ASP.NET on IIS), but I'm still curious whether other technologies/servers have this issue.
Thanks to Cody (Security Champion) for the response below:
Reviving an old thread here, but we've had to mitigate this problem for many applications. We've actually done it at the load balancer level, so that all responses, error or otherwise, are treated the same.
We're doing that with an iRule for F5 (iRules Home): https://clouddocs.f5.com/api/irules/
The iRule has an array of blacklisted header names and simply removes those headers from every response.
If you're trying to accomplish the same thing within AWS or some other managed load-balancing mechanism, unfortunately I'm not sure how you would go about it.
Related
I have an ashx handler, and the response is not gzipped. The Content-Encoding header received by the client is empty.
The IIS settings for the site have static and dynamic compression enabled.
Research into similar problems shows that some people have an httpCompression node under the web server node in the IIS Configuration Editor. I do not have such a node; I have a urlCompression node, where I have set everything to true. Perhaps that is IIS-version dependent. The operating system is Windows Server 2008 R2.
I am about to try to "force" compression using the Filter property and the GZipStream class (credit to Rick Strahl's blog). If anyone can tell me why IIS is not auto-compressing, or can point out any gotchas in my workaround, I would be grateful.
Update: attaching a GZipStream to the response filter reduced the content length by half as seen by the client, which seems to indicate the "manual" compression is doing something.
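The workaround amounts to something like the following (a minimal sketch of the GZipStream-on-Response.Filter technique credited above; the handler name and payload are made up):

using System;
using System.IO.Compression;
using System.Web;

public class DataHandler : IHttpHandler   // hypothetical handler name
{
    public void ProcessRequest(HttpContext context)
    {
        // Only compress when the client advertises gzip support.
        string acceptEncoding = context.Request.Headers["Accept-Encoding"] ?? "";
        if (acceptEncoding.IndexOf("gzip", StringComparison.OrdinalIgnoreCase) >= 0)
        {
            // Wrap the output stream so everything written is compressed.
            context.Response.Filter = new GZipStream(
                context.Response.Filter, CompressionMode.Compress);
            context.Response.AppendHeader("Content-Encoding", "gzip");
        }

        context.Response.ContentType = "text/plain";
        context.Response.Write("payload goes here");
    }

    public bool IsReusable { get { return true; } }
}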
I am aware this was previously asked here:
.ashx handler not getting gzip compressed despite IIS Config setting
However, the previous question did not receive any answers, so I am asking the question again.
Please check whether you are adding an "Accept-Encoding: gzip" header to the request when making the HTTP call.
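If the client is .NET code, a minimal sketch of doing that (the URL is a placeholder):

using System;
using System.Net;

class Client
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create(
            "http://example.com/handler.ashx");   // placeholder URL

        // Sends "Accept-Encoding: gzip" with the request and transparently
        // decompresses the response body.
        request.AutomaticDecompression = DecompressionMethods.GZip;

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Content-Encoding: " + response.ContentEncoding);
        }
    }
}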
We're dealing with a legacy application that emitted an HTML blurb. This HTML blurb was previously consumed by a web browser. Now we're turning (a new instance of) it into a service and bringing it into the intranet, inside the corporate firewall, to be consumed by automated clients.
The code was returning HTTP 200 OK for Not Found situations, along with some HTML explaining that the information could not be found. We're considering returning a 404 Not Found code instead, but we want some way to tell responses from the application and responses from the web server apart (in case something gets misconfigured and the app can no longer be reached).
Now we get to the meat of the question: should we change the X-Powered-By header in the application? Do web servers and proxies respect that? We can certainly test our current web server, but can we count on future server updates/changes to respect this behaviour? Is this header governed by any spec (an RFC, etc.)?
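What we're considering amounts to something like this (a sketch only; the X-App-Source header name and its value are made up for illustration):

using System.Web;

public static class NotFoundWriter
{
    // Return a real 404 while still emitting the explanatory HTML, plus an
    // application-specific marker header so automated clients can tell app
    // responses apart from web-server responses.
    public static void WriteNotFound(HttpResponse response)
    {
        response.StatusCode = 404;
        response.TrySkipIisCustomErrors = true; // keep IIS from substituting its own error page
        response.AppendHeader("X-App-Source", "legacy-blurb-service");
        response.ContentType = "text/html";
        response.Write("<html><body>The information could not be found.</body></html>");
    }
}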
All of a sudden, all of the websites on my server return a 400 Bad Request error. I don't have a clue what happened. The app pools are running in Classic pipeline mode (4.0 or 2.0; it doesn't matter which).
Every URL that I type comes back as 400 Bad Request. Real URLs, and even fake URLs that don't exist (which should come back as 404), all return 400.
http://mywebsite.com/AFile.aspx
http://mywebsite.com/AFolder/AnotherFile.aspx
http://mywebsite.com/Bfolder/YetAnotherSillyPage.aspx
http://mywebsite.com/A_stupid_URL_that_does_not_even_exist_fjfjffjfj.aspx
Everything is 400 Bad Request. It has totally screwed up my ASP.NET. Where should I begin to look? Machine.config? Web.config?
UPDATE:
After trying a million different settings, I finally set the app pool to Integrated and set the identity to LocalSystem, and all of a sudden it works.
A 400 Bad Request usually means HTTP.sys is rejecting the request due to something really bad (like an invalid URL or something similar).
You should probably look at the HTTP.sys logs (not the IIS logs) at:
C:\Windows\System32\LogFiles\HTTPERR
Also, maybe something got broken in the HTTP.sys configuration, so try running:
netsh http show servicestate
And see whether your web site has the correct bindings. For example, it could be that the bindings are listening only on specific IP addresses while the requests are coming in on another one, or there is a similar problem with the host name, etc.
Finally you might want to run:
C:\Windows\System32\inetsrv\appcmd list sites
And see whether the bindings and status make sense.
Have you tried re-installing (or uninstalling and re-installing) ASP.NET using the aspnet_regiis.exe utility? That has fixed strange IIS/ASP.NET server issues for me in the past.
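For reference, the usual invocation looks something like this (the framework path varies with the installed version and bitness):
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i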
Have you looked in the event log for any error messages or further clues?
I'm getting sporadic errors from users of a CMS; Ajax requests sometimes result in a "501 Method Not Implemented" response from the server. Not all the time; it usually works.
The application has been stable for months. Users seem to be getting the error with Firefox 3. I've seen a couple of references via Google to such problems being related to having "charset=UTF-8" in the Content-Type header, but these may be spurious.
Has anyone seen this error or have any ideas about what the cause could be?
Thanks
Ian
You may want to check the server logs to see what's causing the issue. For example, it might be that these requests are garbled, say, because of a flaw in the HTTP 1.1 persistent-connection implementation.
Try this:
1. Clear your cookies and your cache.
2. Type about:config into the URL bar to open Firefox's list of configuration settings.
3. Locate the setting 'network.automatic-ntlm-auth.trusted-uris' and set its value to the names of the servers to use NTLM with.
4. Locate the setting 'network.negotiate-auth.trusted-uris' and set its value to the same server names.
5. Set network.automatic-ntlm-auth.allow-proxies = true.
6. Restart Firefox and test the URL to the application.
The problem occurs because your app is not running on the same domain as your service. You need to configure your server to accept those calls by adding the 'Access-Control-Allow-Origin' header.
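A minimal sketch of adding that header on the ASP.NET side; the origin value is a placeholder for wherever your app is actually served from:

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Placeholder origin; use the origin your app is actually served from.
        HttpContext.Current.Response.AppendHeader(
            "Access-Control-Allow-Origin", "https://app.example.com");
    }
}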
When analyzing traffic with a packet sniffer, we are seeing an HTTP response from a WebLogic server before the HTTP POST to that server has completed.
In this case, the JSP page on the server is basically a static page; there is no logic to do anything with the contents of the POST at this time.
But why would the server send the response before the POST has completed?
I found WebLogic documentation about configuring the server to guard against denial-of-service attacks that use HTTP POST. Maybe that is what is happening?
No one I know has seen this behaviour before. Maybe some WebLogic-savvy person will know what is going on.
Thanks
I don't think that WebLogic is analyzing the JSP to determine whether it is static or not.
My guess is that either
- someone else was accessing the server at the same time, or
- the response you saw belonged to a previous request.
[EDIT] To determine what is going on, I suggest setting a breakpoint in the JSP. If you still get a response without hitting the breakpoint, something further up the stack must be intercepting the request (for example, a cache).