I have a firewall implementation and I want to log all the websites visited on the machine. So when the user enters an address in the browser (any browser) or clicks a link, I want to be able to log the visited address.
The problem is that I want to log only the visited address and NOT the other resources requested by the page (ads, iframes, Google stats and so on). Is there a method to do this by looking at the HTTP or TCP headers, or by any other means?
Thank you.
A possible method would be to use "transparent proxying": have the firewall automatically redirect all outbound HTTP connections to a proxy. You'll find the desired information in the proxy's log.
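To make the idea concrete, here is a rough Python sketch of such a logging proxy. It is only a sketch: it assumes the firewall redirects outbound port-80 traffic to port 8080 on this host, it relies on the Host header rather than the original destination address, it handles neither HTTPS nor keep-alive, and a real deployment would use an existing proxy such as Squid and read its access log. Note that it still logs every request, so filtering out ads, iframes and other sub-resources remains to be done on top of this log:

    # Sketch of a logging proxy for redirected plain-HTTP traffic.
    # Port numbers and timeouts are illustrative assumptions.
    import socket
    import threading

    LISTEN_ADDR = ("0.0.0.0", 8080)   # assumed firewall redirect target

    def handle(client: socket.socket) -> None:
        try:
            request = client.recv(65535)
            if not request:
                return
            lines = request.split(b"\r\n")
            method, path, _ = lines[0].decode(errors="replace").split(" ", 2)
            host = next((l.split(b":", 1)[1].strip().decode()
                         for l in lines if l.lower().startswith(b"host:")), "")
            print(f"LOG {method} http://{host}{path}")   # the "proxy log" entry

            upstream = socket.create_connection((host, 80), timeout=10)
            upstream.sendall(request)                     # relay the request as-is
            upstream.settimeout(5)
            try:
                while True:
                    chunk = upstream.recv(65535)
                    if not chunk:
                        break
                    client.sendall(chunk)                 # relay the response back
            except socket.timeout:
                pass                                      # no keep-alive handling here
            upstream.close()
        except (OSError, ValueError):
            pass
        finally:
            client.close()

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(LISTEN_ADDR)
    server.listen(50)
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()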
A somewhat easier method I found was to use Microsoft® Active Accessibility® (MSAA) and read the URL from the browser's address bar. But this is tricky in other ways: you have to take into consideration the UI layouts of multiple browsers (at least the most popular ones) and also the differences between versions of the same browser. Some browsers or browser versions have limited support for MSAA and don't expose all the controls (e.g. Opera 10.50-10.51, although this was fixed in 10.52).
Related
I have configured Bro on my system successfully; the OS is CentOS 7. I need to monitor multimedia traffic, e.g. YouTube, and some social sites like Facebook. I ran Bro for a few minutes while using Facebook and YouTube, but there is no information in the http log file about YouTube, nor about Facebook. I think this is a protocol problem, since Facebook uses HTTPS rather than HTTP, but I do not know why YouTube is missing as well.
I followed these steps after setting the correct interface:
[BroControl] > install
Then
[BroControl] > start
But I have not found any YouTube or Facebook info in http.log. How can I get traffic info for such websites?
The problem is that you are expecting SSL encrypted traffic to be magically decrypted and appear in your http.log. If you look again, you will find that YouTube also runs over HTTPS.
Unless you are doing something to intercept and act as a man-in-the-middle for the SSL/TLS connections, you cannot expect to be able to see the content. If you can't see it, Bro can't see it either. :)
If you want to verify that you are properly configured, you would be best served looking at the conn.log to verify that the connections are occurring. Once you do that, search for the UID values in the other logs and I strongly suspect that you will see that you are finding SSL certificate data.
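If you want to script that check, here is a small sketch that joins conn.log and ssl.log on the uid column and prints the TLS server names (SNI) it finds. The field names match the stock Bro logs, but the log paths below assume a standard broctl layout and are only an assumption:

    # Join conn.log and ssl.log on uid to see which connections were TLS
    # and which server name they carried.
    def read_bro_log(path):
        """Yield one dict per record of a tab-separated Bro log."""
        fields = None
        with open(path) as f:
            for line in f:
                if line.startswith("#fields"):
                    fields = line.rstrip("\n").split("\t")[1:]
                elif line.startswith("#") or fields is None:
                    continue
                else:
                    yield dict(zip(fields, line.rstrip("\n").split("\t")))

    ssl_by_uid = {rec["uid"]: rec.get("server_name", "-")
                  for rec in read_bro_log("/usr/local/bro/logs/current/ssl.log")}

    for conn in read_bro_log("/usr/local/bro/logs/current/conn.log"):
        if conn["uid"] in ssl_by_uid:
            print(conn["uid"], conn["id.resp_h"], ssl_by_uid[conn["uid"]])

If YouTube and Facebook connections show up here, Bro is seeing the traffic; it is simply logging it as SSL rather than HTTP.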
Several things come to mind:
1) What are the contents of /usr/local/bro/etc/node.cfg? Make sure it names the interface you expect traffic to cross via a span or tap (a sample standalone node.cfg is shown after this list).
2) Run tcpdump -i <interface>, where <interface> comes from question 1, to confirm traffic is actually visible on it.
3) Run /usr/local/bro/bin/broctl diag to see if there are any issues.
4) Run /usr/local/bro/bin/broctl status to verify everything is running.
If the interface is wrong, the solution may be that easy.
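For reference, a standalone node.cfg typically looks like the snippet below; the interface name here is just an example and must match whatever your span or tap actually feeds:

    [bro]
    type=standalone
    host=localhost
    interface=eth0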
If your computer is infected, apparently Google will tell you so, as shown in the image below:
According to this article, Google uses HTTP headers to work this out. But how do they do it? What sort of headers should we look for?
Thank you!
The Google Security blog post you linked doesn't mention HTTP headers.
A key point in the blog post is this:
This particular malware causes infected computers to send traffic to Google through a small number of intermediary servers called “proxies.”
And this:
...taking steps to notify users whose traffic is coming through these proxies...
Google doesn't say much about the proxies, for instance whether they were standards-compliant(ish) HTTP proxies or just servers echoing the users' requests.
The "unusual" traffic that originated from Google would have been from a small set of IP addresses. No special HTTP headers would be necessary. Google only had to add the warning message to pages being served to the suspect IP addresses. That's it.
The term "signature" in the the follow up link from your comments is used very informally, probably alluding to the IP addresses of the proxy servers. If you want to imagine something more complicated than that, then I suppose it's possible that these proxies (like many HTTP clients) could be detected by some pattern of HTTP headers unique to them. For example the User-Agent or Via headers, or even something more subtle like the ordering or capitalization of headers. I doubt it came to that though, and I don't see much value in speculating, especially two years after the fact.
Getting this error message in the browser:
Attention!!!
The transfer attempted appeared to contain a data leak!
URL=http://test-login.becreview.com/domain/User_Edit.aspx?UserID=b5d77644-b10e-44e0-a007-3b9a5e0f4fff
I've seen this before but I'm not sure what causes it. It doesn't look like a browser error or an ASP.NET error. Could it be some sort of proxy error? What causes it?
That domain is internal, so you won't be able to go to it. Also, the page has almost no styling: an h1 for "Attention!!!", and the other two lines are wrapped in p tags, if that helps any.
For anyone else investigating this message, it appears to be a Fortinet firewall's default network data-leak prevention message.
It doesn't look like an ASP.NET error that I've ever seen.
If you think it might be a proxy message you should reconfigure your browser so it does not use a proxy server, or try to access the same URL from a machine that has direct access to the web server (and doesn't use the same proxy).
This is generated from an inline IPS sensor (usually an appliance or a VM) that is also configured to scan traffic for sensitive data (CC info, SSNs etc). Generally speaking, the end user cannot detect or bypass this proxy as it is deployed to be transparent. It is likely also inspecting all SSL traffic. In simple terms, it is performing a MITM attack because your organizational policy has specified that all traffic to and from your network be inspected.
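For illustration only, here is a toy sketch of the kind of pattern matching such a sensor might apply to outbound request bodies. Real DLP products use far more elaborate rules (Luhn checks on card numbers, document fingerprinting, and so on); the patterns below are assumptions made up for the example:

    # Toy data-leak check: flag payloads containing SSN- or card-like patterns.
    import re

    PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
    }

    def looks_like_data_leak(payload: str) -> bool:
        return any(p.search(payload) for p in PATTERNS.values())

    print(looks_like_data_leak("name=Bob&ssn=123-45-6789"))   # True -> block and warn
    print(looks_like_data_leak("UserID=b5d77644-b10e"))       # False -> pass through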
There is a specific set of processes that happens between a user hitting www.google.com and seeing the page in the browser. Can anybody tell me everything that happens during this process? Also, how is a mobile browser different from a desktop browser?
This really depends on what browsers you're comparing. For example, Safari Mobile and Safari for Mac are quite similar to one another, so much so that you often see the same page on both. However, IE for Pocket PCs is quite different from IE8, and pages would render somewhat differently in those two.
Usually, site operators check the User-Agent string that all browsers send to see which browser it is. It's then up to the site operator whether to show a mobile site or the regular site.
PPK has a great list of all browser quirks and features, at quirksmode.org. It's a must-read for mobile development.
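As a minimal sketch of that User-Agent check (real sites tend to rely on maintained detection libraries or responsive design, and the keyword list below is an illustrative assumption, not an exhaustive one):

    # Crude mobile detection based on User-Agent keywords.
    MOBILE_KEYWORDS = ("Mobile", "Android", "iPhone", "iPad", "Windows Phone")

    def wants_mobile_site(user_agent: str) -> bool:
        return any(keyword in user_agent for keyword in MOBILE_KEYWORDS)

    ua = "Mozilla/5.0 (iPhone; CPU iPhone OS 9_1 like Mac OS X) ... Mobile/13B143"
    print(wants_mobile_site(ua))   # True -> serve the mobile site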
1) Name resolution. www.google.com gets resolved to an IP address through DNS.
2) HTTP request. The browser sends a GET request to the server.
3) HTTP response. The server sends back an HTTP response.
4) Parse. The client parses the resulting document and resolves referenced assets (CSS, images, etc.).
5) HTTP requests. For each referenced asset, the browser sends another request to the server.
6) HTTP responses. For each referenced asset, the server responds.
In this respect (how HTTP is requested), mobile is no different from desktop.
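Here is a bare-bones walk through the first three steps above, using only the Python standard library. Real browsers use HTTPS and then parse the returned HTML to fetch the referenced assets:

    import socket
    import http.client

    # Step 1) Name resolution: www.google.com -> IP address via DNS
    ip = socket.gethostbyname("www.google.com")
    print("resolved to", ip)

    # Step 2) HTTP request: the browser sends a GET to the server
    conn = http.client.HTTPConnection("www.google.com", 80, timeout=10)
    conn.request("GET", "/")

    # Step 3) HTTP response: the server answers with status, headers and a body
    resp = conn.getresponse()
    print(resp.status, resp.reason)
    body = resp.read()
    print(len(body), "bytes of HTML to parse for css/images/scripts")
    conn.close()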
The same stuff happens: mobile browsers render HTML documents just like your PC browser.
Of course they might have less memory, different rendering engines, run on a very small screen, and so on. But in the end it is just another HTTP request to google.com.
Depending on the network or connection type, there might be another difference: the operator gateway/proxy. Some operators filter/proxy all communication to the net.
Also, internet traffic from an operator's customers is usually routed through a couple of public IPs.
I want to know: when the browser sends a request, does the server send back the contents explicitly? And how would I confirm it?
There are several toolbars in Firefox that show exactly what is coming and going when making an HTTP request.
For Firefox I use the following plugins:
Firebug
Web Developer
You could also install a utility called Wireshark. It will "sniff" all the network traffic on your computer and show you at a packet level how it all works.
Browser plugins such as Firebug (for Firefox) let you see exactly what the server is returning; that's quite instructive and recommended! You'll see a bunch of headers followed by the response body in any of several formats (it could be chunked, etc.).
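If you want to confirm it without any browser plugin, a few lines of Python can show the raw bytes the server sends back (status line, headers, then the body); the host and request below are just an example:

    import socket

    # Send a minimal HTTP/1.1 request and collect the raw response bytes.
    sock = socket.create_connection(("example.com", 80), timeout=10)
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

    raw = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        raw += chunk
    sock.close()

    headers, _, body = raw.partition(b"\r\n\r\n")
    print(headers.decode(errors="replace"))   # the status line and response headers
    print(len(body), "bytes of content")      # the contents the server sent back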
In a Windows environment you can use Fiddler.
Fiddler includes a fair amount of documentation and is easy to use.