I have installed "Squid for Windows" on my network to provide shared Internet access to about 6 users, and it works fine.
My main purpose in setting up Squid is to save bandwidth on my 4G Internet connection, but I have not found any traffic monitoring tool to monitor user activity and the hit rate and miss rate of the server.
I want to know which GUI monitoring tools are available for Squid and how to configure them in a Windows environment.
The Squid cache manager tool will give you information about request rates, failures, CPU and memory utilization, and similar details. I'm not sure whether it gives per-user/per-IP details of requests.
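If you just want the headline numbers without a GUI, the cache manager can be queried directly: squidclient mgr:info does it, and so does a plain HTTP request for a cache_object:// URL sent to the proxy port. A minimal Python sketch, assuming Squid listens on the default port 3128 on the local machine and the default manager ACLs are in place:

import socket

PROXY_HOST = "127.0.0.1"   # assumption: Squid runs on this machine
PROXY_PORT = 3128          # assumption: default Squid port

# The cache manager is queried by sending an HTTP request for a
# cache_object:// URL to the proxy port (this is what squidclient mgr:info does).
request = (
    "GET cache_object://localhost/info HTTP/1.0\r\n"
    "Accept: */*\r\n"
    "\r\n"
)

with socket.create_connection((PROXY_HOST, PROXY_PORT), timeout=10) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk

# The reply contains hit ratios, request rates, memory/CPU usage, etc.
print(response.decode("ascii", errors="replace"))

Other manager pages (for example cache_object://localhost/counters) can be fetched the same way.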
If you are using any authentication on the proxy, you can log the username against each log entry and write a small app to process your Squid access log, so that you can get all the details per IP/username.
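As a starting point, here is a rough sketch of such a log-crunching app, assuming the default native access.log format (columns: timestamp, elapsed, client IP, result/status, bytes, method, URL, user, hierarchy, type); the path is illustrative:

import collections
import sys

LOG_PATH = sys.argv[1] if len(sys.argv) > 1 else "access.log"  # adjust for your install

bytes_by_client = collections.Counter()
hits = misses = 0

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        parts = line.split()
        if len(parts) < 8:
            continue
        client_ip, result_code, size, user = parts[2], parts[3], parts[4], parts[7]
        key = user if user != "-" else client_ip   # prefer the username when auth is on
        bytes_by_client[key] += int(size)
        if "HIT" in result_code:
            hits += 1
        else:
            misses += 1

total = hits + misses
print(f"requests: {total}, hit rate: {hits / total:.1%}" if total else "no requests")
for who, size in bytes_by_client.most_common():
    print(f"{who:20s} {size / 2**20:10.2f} MiB")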
You can also use SARG (Squid Analysis and Report Generator), which will give you all the required information.
I have been researching how event logs are collected from cloud-based applications like Dropbox without deploying agents. I haven't found any clear explanation of this; it would be great if someone could explain.
This is a very broad topic and can be very confusing because everyone logs differently, so while I cannot answer the question definitively, I can hopefully help you along.
A good heuristic is to see whether the cloud service supports one of the oldest logging standards, Syslog. Typically, if it does, you will not need to deploy an agent; you just configure log forwarding and listen for messages on a Linux server you control (which already has a logging service running, though it might need additional configuration). Also, if the cloud service has a Syslog service running on its side, you can potentially use that service to forward logs to your Syslog server.
The transport should be TLS, because logs can unknowingly contain very sensitive data (Twitter recently put out a security warning concerning this). You can see how to configure a Linux Syslog server with TLS here.
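If you want to prototype the receiving end yourself rather than configuring rsyslog, a minimal Python sketch of a TLS listener might look like this (certificate paths and the newline framing are assumptions; a production setup should stick with rsyslog/syslog-ng and RFC 5425 octet-counted framing):

import socketserver
import ssl

# Hypothetical certificate/key for the listener -- adjust paths to your setup.
CONTEXT = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
CONTEXT.load_cert_chain("/etc/ssl/certs/syslog-server.pem",
                        "/etc/ssl/private/syslog-server.key")
LISTEN_ADDR = ("0.0.0.0", 6514)   # 6514 is the registered port for syslog over TLS

class SyslogTLSHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # RFC 5425 octet-counts each message; many senders can also be configured
        # for newline-delimited framing, which is what this sketch reads.
        for line in self.rfile:
            print(f"{self.client_address[0]}: {line.decode(errors='replace').rstrip()}")

class SyslogTLSServer(socketserver.ThreadingTCPServer):
    allow_reuse_address = True

    def get_request(self):
        # Wrap each accepted connection in TLS before the handler sees it.
        sock, addr = super().get_request()
        return CONTEXT.wrap_socket(sock, server_side=True), addr

if __name__ == "__main__":
    with SyslogTLSServer(LISTEN_ADDR, SyslogTLSHandler) as server:
        server.serve_forever()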
I have a website hosted in Azure Websites as a Basic tier website.
I'm currently in the development stage, yet the site is live and accessible by the outside world (at least at a basic level), so I wanted to better understand the monitoring features in the Azure management portal.
When I look at the monitoring tab inside the portal, I see an odd pattern for HTTP successes. Looking at the past 60 minutes (during which I personally have not been active), the HTTP successes are very cyclic: 80 connections, then 0, then 40, then 0, then repeat.
Does anyone have any pointers on how I can figure out what the 80 and 40 connections are? I certainly don't have any timed events in my code, so there shouldn't be any calls being made unless a person is actually hitting the site.
UPDATE:
I set up a staging server and blocked all incoming traffic except from my own IP. So it's the same code running, just without access from the outside world, and the HTTP successes appear only when I hit the server myself (as expected). This suggests that my site is being hit by an outside bot, maybe? Does anyone know how to protect against this, or at least how to diagnose whether the requests are legitimate?
I'd say it's this setting that causes the traffic:
Always On. By default, websites are unloaded if they are idle for some period of time. This lets the system conserve resources. In Basic or Standard mode, you can enable Always On to keep the site loaded all the time. If your site runs continuous web jobs, you should enable Always On, or the web jobs may not run reliably
http://azure.microsoft.com/en-us/documentation/articles/web-sites-configure/
It's just a keep-alive to avoid cold starts every time you or someone else visits your site.
Here's another reference that describes this behavior:
What the always-on feature does is simply ping your site every now and then, to keep the application pool up and running.
And Scott Gu says:
One of the other useful Web Site features that we are introducing today is a feature we call “Always On”. When Always On is enabled on a site, Windows Azure will automatically ping your Web Site regularly to ensure that the Web Site is always active and in a warm/running state. This is useful to ensure that a site is always responsive (and that the app domain or worker process has not paged out due to lack of external HTTP requests).
About the traffic in general: first of all, the requests could really only come from Microsoft, since any traffic pattern like this would quickly be detected and blocked automatically when using Azure Websites - you cannot set up a keep-alive like this yourself. Second, no modern bot whatsoever would ping a specific page with that kind of regularity, since it's all too obvious. Any modern datacenter security appliance would catch that kind of traffic and block/ignore/null-route it.
As for your question regarding protection and security: Microsoft cannot protect your code from yourself. However, everything at the perimeter is managed and handled by Microsoft. That's one of the USPs of Azure - firewalling, load balancing, anti-spoofing, anti-bot and DDoS protection, etc. There will of course always be security concerns regarding any publicly exposed service, but you can stay focused on your application while Microsoft manages the rest.
When running Azure Websites, you're in Microsoft's hands regarding security outside of your application's scope. That's a great thing, but if you really want to be able to use other security measures, you'll have to set up a virtual machine instead and run your site from there.
You may want to first understand what these requests are. Enable web server logging for the website in the Azure Management Portal and download the IIS logs for your website after seeing this pattern. Then check them to see the URL, client IP address, and user-agent field for the requests, to identify whether they really come from search bots. Based on what you observe, you can either statically block some IPs, use dynamic IP restrictions, or configure URL Rewrite to block requests with specific patterns in the request or request headers.
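As a rough sketch of that triage step, the following Python script (field names are the standard IIS W3C ones; the log paths are whatever you downloaded) counts requests per client IP and per user agent:

import collections
import sys

# Minimal W3C extended log parser for IIS logs downloaded from an Azure Website.
# Field names/order come from the "#Fields:" directive at the top of each log.
def parse_iis_log(path):
    fields = []
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            line = line.rstrip("\n")
            if line.startswith("#Fields:"):
                fields = line.split()[1:]
                continue
            if not line or line.startswith("#"):
                continue
            yield dict(zip(fields, line.split()))

def main(paths):
    by_agent = collections.Counter()
    by_ip = collections.Counter()
    for path in paths:
        for entry in parse_iis_log(path):
            # cs(User-Agent) and c-ip are standard IIS field names.
            by_agent[entry.get("cs(User-Agent)", "-")] += 1
            by_ip[entry.get("c-ip", "-")] += 1
    print("Top user agents:")
    for agent, count in by_agent.most_common(10):
        print(f"  {count:6d}  {agent}")
    print("Top client IPs:")
    for ip, count in by_ip.most_common(10):
        print(f"  {count:6d}  {ip}")

if __name__ == "__main__":
    main(sys.argv[1:])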
EDIT
This is how you can block search bots - http://moz.com/ugc/blocking-bots-based-on-useragent
You can configure URL Rewrite locally on an IIS server in the way described in the above article and then copy the generated configuration into your web.config, or connect to the Azure website directly using IIS Manager as described in http://azure.microsoft.com/blog/2014/02/28/remote-administration-of-windows-azure-websites-using-iis-manager/ and configure the URL Rewrite rule there.
I'm wondering if there is a way to capture some SAML POST tokens/data in the network traffic without using 3rd-party software such as Fiddler 2, and without having admin rights on the computer to upgrade web browsers or install anything. I would need to remote into this person's computer and try to capture the data that I need to look at for an issue that is presenting itself. But the person's computer I would remote into does not have admin rights to install software of any kind, or even do updates for that matter. They are running IE8. Is there a way to capture network traffic from their computer without admin rights or 3rd-party software?
Honoring your request to not consider 3rd party software...
Depending on the user permissions available, you could try setting IE's HTTP proxy settings to use a remote IP you control - one presumably running a proxy/debug tool of your choice. For example, you could run a small VM in a cloud such as Amazon EC2, run a tool such as Fiddler, Burp Suite, Charles, etc., and inspect traffic on the user's behalf. Most HTTP debuggers like this do support configuration to allow remote computers to use them as a proxy.
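For reference, the IE proxy settings live under the current user's registry hive, so changing them does not require admin rights; you can set them by hand in Internet Options -> Connections -> LAN settings, or, purely as an illustration of which values are involved, with a few lines of Python (the proxy address is hypothetical):

import winreg

# Hypothetical address of the machine you control that runs Fiddler/Burp/etc.
PROXY = "203.0.113.10:8888"

# These are the per-user values that the IE "Internet Options" dialog writes,
# so no admin rights are needed to change them.
KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "ProxyServer", 0, winreg.REG_SZ, PROXY)
    winreg.SetValueEx(key, "ProxyEnable", 0, winreg.REG_DWORD, 1)

print("IE proxy set to", PROXY, "- restart IE for it to take effect")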
Can anyone help me find a way to do a 'per-user quota' on Squid? People log in to the proxy and have a cap, say 1 GB (it might be different for each user), and when they use up their 1 GB of bandwidth their Internet access stops and they are redirected to a page on a web server saying they have run out of quota, etc.
Squid doesn't have anything built in to do that. You have to write an external helper plus a process that counts the bytes transferred from the logs. I'd write a daemon that receives the logs via UDP and that the external helper can query to check whether the user is over quota.
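To make the helper side concrete, here is a rough sketch in Python; the squid.conf wiring in the docstring, the accounting daemon's address/protocol, and the 1 GB cap are all illustrative assumptions:

#!/usr/bin/env python3
"""Rough sketch of a Squid external ACL helper that enforces a per-user quota.

Assumes squid.conf wires it up roughly like this (names are illustrative):

  external_acl_type quota_check %LOGIN /usr/local/bin/quota_helper.py
  acl over_quota external quota_check
  deny_info http://intranet/quota-exceeded.html over_quota
  http_access deny over_quota

and that a separate accounting daemon answers "username -> bytes used" queries
over UDP on 127.0.0.1:9999 (that daemon, which receives the access log and sums
the bytes column, is not shown here).
"""
import socket
import sys

ACCOUNTING_ADDR = ("127.0.0.1", 9999)   # hypothetical accounting daemon
QUOTA_BYTES = 1 * 1024 ** 3             # 1 GB per user in this sketch

def bytes_used(username: str) -> int:
    """Ask the accounting daemon how many bytes this user has transferred."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(1.0)
        sock.sendto(username.encode(), ACCOUNTING_ADDR)
        try:
            reply, _ = sock.recvfrom(64)
            return int(reply)
        except (socket.timeout, ValueError):
            return 0  # fail open if the daemon is unreachable

def main():
    # Squid writes one username per line and expects "OK" or "ERR" back.
    for line in sys.stdin:
        username = line.strip()
        if not username:
            continue
        # "OK" means the ACL matches (the user IS over quota), so access is
        # denied and redirected by the deny_info/http_access rules above.
        print("OK" if bytes_used(username) >= QUOTA_BYTES else "ERR", flush=True)

if __name__ == "__main__":
    main()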
I would like to duplicate the functionality of some web filtering software, however I don't want the user to have to configure their browser. Some other products on the market do this without any apparent configuration in the browser settings.
The user would be installing this for themselves, so air-tight filter security is not a priority. But ease of installation and the ability to apply to an arbitrary browser would be important.
Since the vision is standalone desktop software, inserting a filter on another upstream machine is not really an option.
You will need software that runs on a network node that all Internet traffic flows through, and it will have to intercept HTTP requests and redirect them accordingly.
Some routers have this sort of capability; it can also be accomplished on Linux routers using iptables and a Squid proxy.
Install your program as a proxy for all HTTP traffic.
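To illustrate the idea (not the transparent-interception part, which still needs a redirect layer such as the router/iptables approach above or WFP below), here is a deliberately minimal Python sketch of a blocking proxy: plain HTTP only, no CONNECT/HTTPS, no keep-alive, with a hypothetical blocklist:

import socket
import threading

BLOCKED_HOSTS = {"blocked.example.com"}   # hypothetical blocklist
LISTEN_ADDR = ("127.0.0.1", 8080)         # hypothetical listen address

def handle(client):
    try:
        # Read the request headers (GET requests only; bodies are not handled).
        request = b""
        while b"\r\n\r\n" not in request:
            chunk = client.recv(4096)
            if not chunk:
                return
            request += chunk
        header_lines = request.split(b"\r\n")
        # Find the Host header to decide whether to filter this request.
        host_header = next((h.split(b":", 1)[1].strip().decode()
                            for h in header_lines[1:] if h.lower().startswith(b"host:")), None)
        if host_header is None:
            return
        hostname, _, port = host_header.partition(":")
        if hostname in BLOCKED_HOSTS:
            client.sendall(b"HTTP/1.1 403 Forbidden\r\nConnection: close\r\n"
                           b"Content-Length: 8\r\n\r\nBlocked\n")
            return
        # Forward the request as-is (absolute-form request line) to the origin
        # server and stream back whatever it returns, one request per connection.
        upstream = socket.create_connection((hostname, int(port or 80)), timeout=10)
        upstream.sendall(request)
        upstream.settimeout(10)
        try:
            while True:
                data = upstream.recv(4096)
                if not data:
                    break
                client.sendall(data)
        except socket.timeout:
            pass
        upstream.close()
    finally:
        client.close()

def main():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(LISTEN_ADDR)
    server.listen(50)
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()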
Windows Filtering Platform
Windows Filtering Platform (WFP) is a set of API and system services that provide a platform for creating network filtering applications. The WFP API allows developers to write code that interacts with the packet processing that takes place at several layers in the networking stack of the operating system. Network data can be filtered and also modified before it reaches its destination.
http://msdn.microsoft.com/en-us/library/aa366510%28VS.85%29.aspx