The transfer attempted appeared to contain a data leak? - asp.net

Getting this error message in the browser:
Attention!!!
The transfer attempted appeared to contain a data leak!
URL=http://test-login.becreview.com/domain/User_Edit.aspx?UserID=b5d77644-b10e-44e0-a007-3b9a5e0f4fff
I've seen this before, but I'm not sure what causes it. It doesn't look like a browser error or an ASP.NET error. Could it be some sort of proxy error?
That domain is internal so you won't be able to go to it. Also the page has almost no styling. An h1 for "Attention!!!" and the other two lines are wrapped in p tags if that helps any.

For anyone else investigating this message, it appears to be a Fortinet firewall's default network data-leak prevention message.

It doesn't look like any ASP.NET error I've ever seen.
If you think it might be a proxy message, you should reconfigure your browser so it does not use a proxy server, or try to access the same URL from a machine that has direct access to the web server (and doesn't use the same proxy).

This is generated from an inline IPS sensor (usually an appliance or a VM) that is also configured to scan traffic for sensitive data (CC info, SSNs etc). Generally speaking, the end user cannot detect or bypass this proxy as it is deployed to be transparent. It is likely also inspecting all SSL traffic. In simple terms, it is performing a MITM attack because your organizational policy has specified that all traffic to and from your network be inspected.
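This is not from the original answers, but if you want to check for yourself whether TLS is being intercepted on your network, a small Python sketch like the one below can help: connect to a well-known site and look at who issued the certificate you actually receive. A corporate or Fortinet issuer instead of the site's real CA, or a verification failure, points to inline inspection.

import socket
import ssl

HOST = "www.google.com"  # any well-known HTTPS site works for this check
context = ssl.create_default_context()
try:
    with socket.create_connection((HOST, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            # The issuer is a tuple of RDNs; flatten it into a simple dict.
            issuer = dict(rdn[0] for rdn in tls.getpeercert()["issuer"])
            print("Certificate issued by:", issuer.get("organizationName"))
except ssl.SSLCertVerificationError as err:
    print("Verification failed (possibly an interception CA your machine does not trust):", err)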

Related

How to monitor video and HTTPS traffic using the Bro network security monitor

I have configured Bro on my system successfully. The OS is CentOS 7. I need to monitor multimedia traffic, e.g. YouTube, and some social sites like Facebook. I ran Bro for a few minutes while using Facebook and YouTube, but there is no information about YouTube in the http log file, nor about Facebook. As far as I can tell this is a protocol problem, since Facebook uses HTTPS rather than HTTP, but I do not know why YouTube is missing too.
I have followed these steps after setting the correct interface:
[BroControl] > install
Then
[BroControl] > start
But I have not found any YouTube or Facebook info in http.log. How can I get traffic info for such websites?
The problem is that you are expecting SSL encrypted traffic to be magically decrypted and appear in your http.log. If you look again, you will find that YouTube also runs over HTTPS.
Unless you are doing something to intercept and act as a man-in-the-middle for the SSL/TLS connections, you cannot expect to be able to see the content. If you can't see it, Bro can't see it either. :)
If you want to verify that you are properly configured, you would be best served looking at the conn.log to verify that the connections are occurring. Once you do that, search for the UID values in the other logs and I strongly suspect that you will see that you are finding SSL certificate data.
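As a rough sketch of that workflow (the log paths assume a default broctl layout; adjust them to wherever your logs land), you can pull the UIDs of connections to port 443 out of conn.log and look them up in ssl.log, which is where the server name and certificate details for YouTube and Facebook will appear:

def read_bro_log(path):
    """Yield each record of a Bro tab-separated log as a dict keyed by the #fields header."""
    with open(path) as handle:
        fields = None
        for line in handle:
            if line.startswith("#fields"):
                fields = line.rstrip("\n").split("\t")[1:]
            elif not line.startswith("#") and fields:
                yield dict(zip(fields, line.rstrip("\n").split("\t")))

LOG_DIR = "/usr/local/bro/logs/current"  # assumed default broctl location

ssl_by_uid = {rec["uid"]: rec for rec in read_bro_log(f"{LOG_DIR}/ssl.log")}

for rec in read_bro_log(f"{LOG_DIR}/conn.log"):
    if rec["id.resp_p"] == "443" and rec["uid"] in ssl_by_uid:
        # HTTPS sites show up here with their TLS server name, not in http.log.
        print(rec["uid"], rec["id.resp_h"], ssl_by_uid[rec["uid"]].get("server_name"))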
Several things come to mind
1) What are the contents of /usr/local/bro/etc/node.cfg? Make sure it is the interface you expect traffic to cross via a span or tap.
2) Run tcpdump -i <interface> where interface comes from question 1.
3) Run /usr/local/bro/bin/broctl diag to see if there are any issues.
4) Run /usr/local/bro/bin/broctl status to verify everything is running.
If the interface is wrong, the solution may be that easy.

ASP MVC: Can I drop a client connection programmatically?

I have an ASP.NET Web API application running behind a load balancer. Some clients keep a busy HTTP connection alive for too long, creating unnecessary affinity and causing high load on some server instances. To fix that, I wish to gracefully close a connection that is making too many requests in a short period of time (thus forcing the client to reconnect and pick a different server instance), while at the same time keeping low-traffic connections alive indefinitely. Hence I cannot use a static configuration.
Is there some API that I can call to flag a request as "answer this, then close the connection"? Or can I simply add the Connection: close HTTP header, which ASP.NET will see and close the connection for me?
It looks like a good solution for your situation would be the built-in IIS functionality called Dynamic IP Restrictions. "To provide this protection, the module temporarily blocks IP addresses of HTTP clients that make an unusually high number of concurrent requests or that make a large number of requests over a small period of time."
It is supported by Azure Web Apps:
https://azure.microsoft.com/en-us/blog/confirming-dynamic-ip-address-restrictions-in-windows-azure-web-sites/
If this answer is helpful, please mark it as the answer. Thanks!
I am not 100% sure this would work in your situation, but in the past I have had to block people coming from specific IP addresses geographically and people coming from common proxies. I created an authorization attribute class following:
http://www.asp.net/web-api/overview/security/authentication-filters
It would dump the person out based on their IP address by returning an HttpStatusCode.BadRequest. On every request you would have to check a list of bad IPs in the database and go from there. Maybe you can handle the rest client-side, because they are going to get a ton of errors.
Write an action filter that returns a 302 Found response for the 'blocked' IP address. I would hope the client would close the current connection and try again at the new location (which could just be the same URL as the original request).

The reason for a mandatory 'Host' clause in HTTP 1.1 GET

Last week I started quite a fuss in my Computer Networks class over the need for a mandatory Host clause in the header of HTTP 1.1 GET messages.
The reason I'm given, be it written on the Web or shouted at me by my classmates, is always the same: the need to support virtual hosting. However, and I'll try to be as clear as possible, this does not appear to make sense.
I understand that in order to allow two domains to be hosted in a single machine (and by consequence, share the same IP address), there has to exist a way of differentiating both domain names.
What I don't understand is why it isn't possible to achieve this without a Host clause (HTTP 1.0 style) by using an absolute URL (e.g. GET http://www.example.org/index.html) instead of a relative one (e.g. GET /index.html).
When the HTTP message got to the server, it (the server) would redirect the message to the appropriate host, not by looking at the Host clause but, instead, by looking at the hostname in the URL present in the message's request line.
I would be very grateful if any of you hardcore hackers could help me understand what exactly I am missing here.
This was discussed in this thread:
modest suggestions for HTTP/2.0 with their rationale.
Add a header to the client request that indicates the hostname and
port of the URL which the client is accessing.
Rationale: One of the most requested features from commercial server
maintainers is the ability to run a single server on a single port
and have it respond with different top level pages depending on the
hostname in the URL.
Making an absolute request URI mandatory (because there's no way for the client to know beforehand whether the server hosts one or more sites) was suggested:
Re the first proposal, to incorporate the hostname somewhere. This
would be cleanest put into the URL itself :-
GET http://hostname/fred http/2.0
This is the syntax for proxy redirects.
To which this argument was made:
Since there will be a mix of clients, some supporting host name reporting
and some not, it just doesn't matter how this info gets to the server.
Since it doesn't matter, the easier to implement solution is a new HTTP
request header field. It allows all clients and servers to operate as they
do now with NO code changes. Clients and servers that actually need host
name information can have tiny mods made to send the extra header field
containing the URL and process it.
[...]
All I'm suggesting is that there is a better way to
implement the delivery of host name info to the server that doesn't involve
hacking the request syntax and can be backwards compatible with ALL clients
and servers.
Feel free to read on to discover the final decision yourself. But be warned, it's easy to get lost in there.
The reason for adding support for specifying a host in an HTTP request was the limited supply of IP addresses (which was not an issue yet when HTTP 1.0 came out).
If your question is "why specify the host in a Host header as opposed to on the Request-Line", the answer is the need for interoperability between HTTP/1.0 and 1.1.
If the question is "why is the Host header mandatory", this has to do with the desire to speed up the transition away from assigned IP addresses.
Here's some background on the Internet address conservation with respect to HTTP/1.1.
The reason for the 'Host' header is to make explicit which host this request refers to. Without 'Host', the server must know ahead of time that it is supposed to route 'http://joesdogs.com/' to Joe's Dogs while it is supposed to route 'http://joscats.com/' to Jo's Cats even though they are on the same webserver. (What if a server has 2 names, like 'joscats.com' and 'joescats.com' that should refer to the same website?)
Having an explicit 'Host' header makes these kinds of decisions much easier to program.
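To make the mechanics concrete, here is a minimal Python sketch, with example.com standing in as the shared server and plain HTTP assumed: both requests go to the same IP address and port, and the Host header is the only thing telling the server which virtual host's content to return.

import socket

def fetch(host_header, server="example.com", port=80):
    """Send a bare HTTP/1.1 GET for / and return the raw response bytes."""
    request = ("GET / HTTP/1.1\r\n"
               "Host: " + host_header + "\r\n"   # mandatory in HTTP/1.1; selects the virtual host
               "Connection: close\r\n"
               "\r\n")
    with socket.create_connection((server, port)) as sock:
        sock.sendall(request.encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

# Same TCP endpoint, two different Host values, potentially two different sites.
print(fetch("joesdogs.com")[:200])
print(fetch("joscats.com")[:200])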

"Proxying" HTTP requests

I have some software which runs as a black box, I have no access to it. This software makes HTTP requests. What I want to do is intercept these requests, forward them on, catch the response, do something with it, before passing the response back to the software.
Can this be done? What's the best method?
Thanks
Edit: Requests are to the public internet from a local intranet via a gateway/router. I have root access to my machine. Another machine could be used as intermediate gateway.
Edit 2: Requests are not encrypted. What I am actually trying to do is save down any images that are requested.
Try yellosoft-alchemy.
If the communication isn't encrypted, use Ethereal (or any other similar program) to sniff the communication on the wire.
Edit: since the communication isn't encrypted, you can do that easily with Ethereal. You can save each TCP stream independently from there.
Edit 2: OK, you want to do this automatically. In this case, I would suggest you look at two tools available on Linux called tcpflow and tcpreen.
tcpreen creates a proxy similar to what you want between a local port and a remote one. It's a TCP proxy, not an HTTP proxy, so this means you'll have to write some parsing tool to isolate the HTTP streams that contain the images you want (probably based on the MIME type of the response). It's not too complex a task, though, if you understand how HTTP works.
tcpflow is similar to tcpreen except that it's a sniffer instead of a proxy. Use whichever tool you think is better suited to your environment.
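If you go the intermediate-gateway route instead, here is a very rough Python sketch of such a proxy, with a hypothetical listening port and output directory, plain-HTTP GET only, and no error handling: point the software (or its gateway) at this host, and any image responses passing through are written to disk before being handed back to the client.

import hashlib
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

SAVE_DIR = "captured_images"  # hypothetical output directory

class ImageSavingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # When used as an HTTP proxy, the request line carries the absolute URL.
        upstream = urllib.request.urlopen(self.path)
        body = upstream.read()
        content_type = upstream.headers.get("Content-Type", "")

        if content_type.startswith("image/"):
            os.makedirs(SAVE_DIR, exist_ok=True)
            name = hashlib.sha1(self.path.encode()).hexdigest()
            with open(os.path.join(SAVE_DIR, name), "wb") as out:
                out.write(body)

        # Hand the response back to the original client unchanged.
        self.send_response(upstream.status)
        self.send_header("Content-Type", content_type or "application/octet-stream")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8080), ImageSavingProxy).serve_forever()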

How to tell if a Request is coming from a Proxy?

Is it possible to detect if an incoming request is being made through a proxy server? If a web application "bans" users via IP address, they could bypass this by using a proxy server. That is just one reason to block these requests. How can this be achieved?
IMHO there's no 100% reliable way to achieve this but the presence of any of the following headers is a strong indication that the request was routed from a proxy server:
via:
forwarded:
x-forwarded-for:
client-ip:
You could also look for proxy or pxy in the client's domain name.
If a proxy server is setup properly to avoid the detection of proxy servers, you won't be able to tell.
Most proxy servers supply headers as others mention, but those are not present on proxies meant to completely hide the user.
You will need to employ several detection methods, such as cookies, proxy header detection, and perhaps IP heuristics to detect such situations. Check out http://www.osix.net/modules/article/?id=765 for some information on this situation. Also consider using a proxy blacklist - they are published by many organizations.
However, nothing is 100% certain. You can employ the above tactics to avoid most simple situations, but at the end of the day it's merely a series of packets forming a TCP/IP transaction, and the TCP/IP protocol was not developed with today's ideas on security, authentication, etc.
Keep in mind that many corporations deploy company wide proxies for various reasons, and if you simply block proxies as a general rule you necessarily limit your audience, and that may not always be desirable. However, these proxies usually announce themselves with the appropriate headers - you may end up blocking legitimate users, rather than users who are good at hiding themselves.
-Adam
Did a bit of digging on this after my domain got hosted up on Google's AppSpot.com with nice hardcore porn ads injected into it (thanks Google).
Taking a leaf from this htaccess idea, I'm doing the following, which seems to be working. I added a specific rule for AppSpot, which injects an HTTP_X_APPENGINE_COUNTRY ServerVariable.
' These request headers appear in ServerVariables with an HTTP_ prefix, so a
' non-empty value for any of them suggests the request came through a proxy
' (or, for HTTP_X_APPENGINE_COUNTRY, through Google App Engine / AppSpot).
Dim proxyVars As New List(Of String)
proxyVars.Add("HTTP_VIA")
proxyVars.Add("HTTP_FORWARDED")
proxyVars.Add("HTTP_USERAGENT_VIA")
proxyVars.Add("HTTP_X_FORWARDED_FOR")
proxyVars.Add("HTTP_PROXY_CONNECTION")
proxyVars.Add("HTTP_XPROXY_CONNECTION")
proxyVars.Add("HTTP_PC_REMOTE_ADDR")
proxyVars.Add("HTTP_CLIENT_IP")
proxyVars.Add("HTTP_X_APPENGINE_COUNTRY")
For Each proxyVar As String In proxyVars
    If Not String.IsNullOrEmpty(HttpContext.Current.Request.ServerVariables(proxyVar)) Then
        HttpContext.Current.Response.Redirect("http://www.your-real-domain.com")
    End If
Next
You can look for these headers in the Request object and decide accordingly whether the request came via a proxy or not:
1) Via
2) X-Forwarded-For
Note that this is not a 100% sure-shot trick; it depends on whether the proxy servers choose to add the above headers.
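For completeness, the same header heuristic fits in a few lines of Python. This sketch is only an illustration: it checks whatever header mapping your framework exposes against the common names listed above (not an exhaustive set), and, as noted, a proxy configured to hide its users will not be caught.

PROXY_HEADERS = {"via", "forwarded", "x-forwarded-for", "client-ip", "proxy-connection"}

def looks_proxied(headers):
    """Return True if any common proxy-revealing header is present (case-insensitive)."""
    return any(name.lower() in PROXY_HEADERS for name in headers)

print(looks_proxied({"Host": "example.com", "X-Forwarded-For": "203.0.113.7"}))  # True
print(looks_proxied({"Host": "example.com"}))                                    # False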
