I have some web services which are called by various clients, including mobile and web. I have no control over the clients' code.
But I need to identify who is calling my web services, via the IP address or something else.
Is there any way to identify that?
A better approach to tracking this sort of thing is to introduce the notion of an API key. That way you know exactly who is using your service, and you can track their usage, etc.
On every call to your service the user would have to provide their key as a means of authorisation (not authentication). This sort of approach can generally help avoid misuse of an API; however, it can't eradicate it completely. At least with this approach, if you do find a malicious user, it's as simple as disabling that particular API key.
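For illustration, a minimal sketch of the key check as an ASP.NET IHttpModule; the header name, the hard-coded keys, and the module name are all placeholders for whatever key store you actually use:

using System;
using System.Collections.Generic;
using System.Web;

public class ApiKeyModule : IHttpModule
{
    // In production this would come from a database or configuration,
    // with a usage counter per key for the tracking mentioned above.
    private static readonly HashSet<string> ValidKeys =
        new HashSet<string>(StringComparer.Ordinal) { "client-a-key", "client-b-key" };

    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            var key = app.Context.Request.Headers["X-Api-Key"]; // hypothetical header name
            if (key == null || !ValidKeys.Contains(key))
            {
                app.Context.Response.StatusCode = 403; // no valid key: refuse the call
                app.Context.Response.End();
            }
            // Otherwise: log the key + request here for per-client usage tracking.
        };
    }

    public void Dispose() { }
}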
You should check your IIS logs; these will list (if you have them turned on, which they are by default) all the requests made to your server.
So search the log for the URL of the service, check the entries around the time of the requests you are having issues with, and the log will list the IP address.
Your logs can generally be found at: C:\inetpub\logs\LogFiles
If the folder is empty then you are out of luck for now: you will need to turn logging on in IIS, after which you will be able to check the logs in a few hours and start seeing where requests are coming from.
E.g. a sample line from a log:
2012-10-29 04:49:44 129.35.250.132 GET /favicon.ico/sign-in returnUrl=%252ffavicon.ico 82 - 27.x.x.x Mozilla/5.0+(Windows+NT+6.1;+rv:16.0)+Gecko/20100101+Firefox/16.0 200 0 0 514
So the first field is the date and time, and the 27.x.x.x field near the end is the client IP address (redacted here, as it's from a real log).
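If you'd rather not eyeball the logs by hand, a small sketch along these lines could pull out the client IPs for your service's URL (the log path, file name, and /MyService prefix are placeholders; the #Fields: header in each log tells you the actual column order):

using System;
using System.IO;

class LogScan
{
    static void Main()
    {
        string[] fields = null;
        // Path and file name are examples; IIS names logs by site ID and date.
        foreach (var line in File.ReadLines(@"C:\inetpub\logs\LogFiles\W3SVC1\u_ex121029.log"))
        {
            if (line.StartsWith("#Fields:"))
            {
                fields = line.Substring(8).Trim().Split(' '); // column names
                continue;
            }
            if (fields == null || line.StartsWith("#")) continue;

            var cols = line.Split(' ');
            int uri = Array.IndexOf(fields, "cs-uri-stem");
            int ip = Array.IndexOf(fields, "c-ip");
            if (uri < 0 || ip < 0 || cols.Length <= Math.Max(uri, ip)) continue;

            if (cols[uri].StartsWith("/MyService")) // hypothetical service URL
                Console.WriteLine("{0} {1} {2}", cols[0], cols[ip], cols[uri]);
        }
    }
}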
I've been looking around but I couldn't find anything useful. What would be the best practice for securing a Symfony app against brute-force attacks? I looked into the SecurityBundle but I couldn't find anything.
Something I do for this is keep a log, using event subscribers, of the IP addresses and/or usernames attempting to log in. Then, if an IP/user has accumulated some number of failed logins within a given time window, I move that IP address/user to a ban list, and from then on, any time that IP/user tries to log in I deny it right away based on that ban list.
You can also play with the time between attempts and all those goodies inside the event subscriber.
Let me know if it makes sense.
Use Cloudflare for DDoS attacks. However, it may be expensive.
You can prevent dictionary attacks using https://github.com/codeconsortium/CCDNUserSecurityBundle
Honestly, I do that at the web/cache server level when I need to. I recently used Varnish Cache to do it, with a module called vsthrottle (which is probably one of many things you can use at the server level). The advantage of doing it at the web-server level instead of in Symfony is that you never even hit the PHP layer and load all the vendors just to end up rejecting a request, and you are not using a separate data store (be it MySQL or something fast like memcached) to log every request and compare it on the next one. If the request reaches the PHP layer, it has already cost you some performance, and a DDoS of that type will still hurt you even if you return a rejection from Symfony, because it forces the server to run PHP and part of the Symfony code.
If you insist on doing it in Symfony, register a listener that listens on all requests, parse the request headers for either the IP address or X-Forwarded-For (in case you are behind a load balancer, in which case only the load balancer's IP will show up on a regular IP check), and then find a suitable way to keep track of all requests up to a minute old (you could probably use memcached for fast storage, with a smart way to increment counts for each IP). If an IP hits you more than, say, 100 times in the last minute, return a Forbidden or Too Many Requests response instead of the usual one. But I do not recommend this, as ready-built solutions (like the Varnish one I used) are usually better; in my case I could throttle specific routes and not others, for example.
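The question is about Symfony, but the counting logic itself is language-agnostic. Here is a rough fixed-window sketch of the idea (shown in C# for brevity; in-memory, so single-server only, and all names and limits are illustrative):

using System;
using System.Collections.Concurrent;

public class RequestThrottle
{
    private const int LimitPerMinute = 100; // illustrative threshold
    private readonly ConcurrentDictionary<string, Tuple<DateTime, int>> _counts =
        new ConcurrentDictionary<string, Tuple<DateTime, int>>();

    // Returns true when the caller should get a 403/429 instead of the page.
    public bool IsDenied(string clientIp)
    {
        var now = DateTime.UtcNow;
        var entry = _counts.AddOrUpdate(
            clientIp,
            ip => Tuple.Create(now, 1),                    // first hit: open a window
            (ip, old) => (now - old.Item1).TotalSeconds > 60
                ? Tuple.Create(now, 1)                     // window expired: start over
                : Tuple.Create(old.Item1, old.Item2 + 1)); // same window: count the hit
        return entry.Item2 > LimitPerMinute;
    }
}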
I have a web site that emails me about every unexpected server-side error.
Quite often (once every 1-2 weeks) somebody launches automated tools that bombard the web site with a ton of different URLs:
sometimes they (hackers?) assume my site hosts phpMyAdmin and try to access (I believe) vulnerable PHP pages...
sometimes they try to access pages that don't actually exist but belong to popular CMSs
last time they tried to inject an invalid ViewState...
It is clearly not search-engine spiders, as 100% of the requests that generated errors are requests to invalid pages.
So far they haven't done much harm; the only cost is that I have to delete a ton of server-error emails (200-300)... But at some point they could probably find something.
I'm really tired of this and am looking for a solution that will block such 'spiders'.
Is there anything ready to use? Any tool, DLLs, etc.? Or should I implement something myself?
In the second case: could you please recommend an approach to implement? Should I limit the number of requests from an IP per unit of time (say, no more than 5 requests per second and no more than 20 per minute)?
P.S. Right now my web site is written using ASP.NET 4.0.
Such bots are not likely to find any vulnerabilities in your system if you just keep the server and software updated. They are generally just looking for low-hanging fruit, i.e. systems that have not been patched against known vulnerabilities.
You could make a bot trap to minimise such traffic: as soon as someone tries to access one of those non-existent pages that you know of, stop all requests from that IP address with the same browser string for a while.
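A sketch of that bot trap as an ASP.NET module (trap paths, ban duration, and the in-memory store are illustrative; you could also key the ban on IP plus User-Agent, as suggested above):

using System;
using System.Collections.Concurrent;
using System.Web;

public class BotTrapModule : IHttpModule
{
    // Paths that no legitimate user of this site would ever request.
    private static readonly string[] TrapPaths = { "/phpmyadmin", "/wp-admin" };
    private static readonly ConcurrentDictionary<string, DateTime> Banned =
        new ConcurrentDictionary<string, DateTime>();
    private static readonly TimeSpan BanFor = TimeSpan.FromHours(1);

    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            var ip = app.Context.Request.UserHostAddress;
            var path = app.Context.Request.Path.ToLowerInvariant();

            DateTime bannedAt;
            if (Banned.TryGetValue(ip, out bannedAt) && DateTime.UtcNow - bannedAt < BanFor)
            {
                app.Context.Response.StatusCode = 403; // still banned: reject outright
                app.Context.Response.End();
                return;
            }

            foreach (var trap in TrapPaths)
            {
                if (path.StartsWith(trap)) { Banned[ip] = DateTime.UtcNow; break; }
            }
        };
    }

    public void Dispose() { }
}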
There are a couple of things you can consider...
You can use one of the available Web Application Firewalls. These usually come with a set of rules and an analytics engine that detect suspicious activity and react accordingly. For example, in your case a WAF could automatically block attempts to scan your site, since it recognises them as an attack pattern.
A simpler (but not 100% reliable) approach is to check the Referer URL (Referer description on Wikipedia) and reject the request if it did not originate from one of your pages (you would probably create an HttpModule for that purpose).
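A rough sketch of that Referer check as an HttpModule (keep in mind the header is client-supplied and trivially spoofed, and this naive version would also block bookmarks and direct visits to deep links, so treat it as a starting point only):

using System;
using System.Web;

public class RefererCheckModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            var referrer = app.Context.Request.UrlReferrer;
            // Only guard deep links; allow direct hits on the home page, say.
            if (app.Context.Request.Path != "/" &&
                (referrer == null || referrer.Host != app.Context.Request.Url.Host))
            {
                app.Context.Response.StatusCode = 403; // did not come from one of our pages
                app.Context.Response.End();
            }
        };
    }

    public void Dispose() { }
}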
And of course you want to be sure that your site addresses all the known security issues from the OWASP Top 10 list (OWASP Top 10). You can find a very comprehensive description of how to do this for ASP.NET here (OWASP Top 10 for .NET book in PDF); I also recommend reading the blog of the author of the aforementioned book: http://www.troyhunt.com/
There's nothing you can do (reliably) to prevent vulnerability scanning; really the only thing to do is to stay on top of any vulnerabilities and prevent their exploitation.
If your site is only used by a select few, from fixed locations, you could maybe use an IP restriction.
I need to design a bug alert system where the web support team is notified via email when a user of our website encounters an error of any sort (a database exception, or a 404).
What would be the best way to design this section of the project? Any ideas would be appreciated.
You may want to look into using the global.asax file for application-wide error intercepting. A quick search yields this step-by-step walk-through:
http://aspnetresources.com/articles/CustomErrorPages.aspx
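The core of that approach is the Application_Error handler in global.asax; a minimal sketch (the SMTP host and addresses are placeholders, and note the volume caveat in the next answer):

using System;
using System.Net.Mail;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Error(object sender, EventArgs e)
    {
        // Fires for every unhandled exception anywhere in the application.
        Exception ex = Server.GetLastError();
        if (ex == null) return;

        using (var client = new SmtpClient("smtp.example.com")) // placeholder host
        {
            client.Send("site@example.com", "support@example.com",
                        "Unhandled error: " + ex.Message, ex.ToString());
        }
        // Optionally Server.ClearError() and redirect to a friendly error page.
    }
}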
Depending on the volume of traffic you're expecting, sending an e-mail every time an error is intercepted may not be the best approach. At best, you'd flood inboxes (and make the support staff very unhappy), and at worst you'd get your mail servers blacklisted for spamming. The approach that I've used in the past on high-traffic sites is to queue up errors in a table that is read and purged at a set interval by a separate process. The process would aggregate the errors, grouping them by type, number of occurrences, etc, then send out an e-mail report to the support mailing lists.
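For the aggregation step, a sketch of how the periodic process might roll queued errors up into one report (the tuple shape here stands in for whatever your error table actually stores):

using System;
using System.Collections.Generic;
using System.Linq;

class ErrorReportJob
{
    // Imagine rows pulled from an ErrorLog table since the last run:
    // Item1 = error type/message, Item2 = time of occurrence.
    public static string BuildReport(IEnumerable<Tuple<string, DateTime>> errors)
    {
        var groups = errors
            .GroupBy(err => err.Item1)            // aggregate by error type
            .OrderByDescending(g => g.Count());   // noisiest errors first

        var lines = new List<string>();
        foreach (var g in groups)
            lines.Add(string.Format("{0} x{1} (last at {2:u})",
                                    g.Key, g.Count(), g.Max(err => err.Item2)));

        return string.Join(Environment.NewLine, lines); // body of the report e-mail
    }
}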
ASP.NET health monitoring may be of interest: http://msdn.microsoft.com/en-us/library/ms998306.aspx. It's really simpler to use than the article makes it appear, and it doesn't require any additional components - it's all built-in.
I would implement an HttpModule that captures the application's Error event.
This would allow the module to be reused across multiple applications. The destination email addresses, SMTP server, etc. could have defaults in the HttpModule and be overridden in the web.config file for maximum flexibility.
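A sketch of that module (the appSettings key names are made up; any configuration scheme would do):

using System;
using System.Configuration;
using System.Net.Mail;
using System.Web;

public class ErrorMailModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.Error += (sender, e) =>
        {
            var ex = app.Server.GetLastError();
            if (ex == null) return;

            // Defaults baked into the module, overridable per site in web.config.
            var smtpHost = ConfigurationManager.AppSettings["ErrorMail.SmtpHost"] ?? "localhost";
            var to = ConfigurationManager.AppSettings["ErrorMail.To"] ?? "support@example.com";

            new SmtpClient(smtpHost).Send("errors@example.com", to,
                                          "Unhandled error", ex.ToString());
        };
    }

    public void Dispose() { }
}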
I have a DB with user account information.
I've scheduled a cron job which updates the DB with any new user data it fetches from their accounts.
I was thinking this may cause a problem, since all requests come from the same IP address and the server may block requests from that IP address.
Is this the case?
If so, how do I avoid being banned? should I be using a proxy?
Thanks
You get banned for suspicious (or malicious) activity.
If you are running a normal business application inside a normal company intranet you are unlikely to get banned.
Since you have access to user account information, you already have a lot of access to the system. The best thing to do is to ask your systems administrator, since he/she defines what constitutes suspicious/malicious activity. The systems administrator might also want to help you ensure that your database is at least as secure as the original source of the information.
should I be using a proxy?
A proxy might disguise what you are doing - but you are still doing it. So this isn't the most ethical way of solving the problem.
Is the cron job that fetches data from this "database" on the same server? Or are you fetching data for a user from a remote server using screen scraping or something?
If it's the latter, you may want to set up a few different cron jobs and do it in batches. That way you reduce the load on the remote server and lower the chance that whoever you are getting this data from blocks your access.
Edit
Okay, so if you have not got permission to do the scraping, you are obviously going to want to do it responsibly (no matter the site). Try to gather as much data as you can in as few requests as possible, and spread them out over the course of the whole day, or even at times when load is likely to be low. I wouldn't try to use a proxy; that wouldn't really help the remote server, and it would be a pain in the ass for you.
I'm no iPhone programmer, and this might not be possible, but you could try having the individual iPhones grab the data, so all the traffic isn't coming from the same IP. Just an idea; otherwise just try to be a bit discreet.
Here are some tips from Jeff regarding the scraping of Stack Overflow, but I'd imagine that the rules are similar for any site (a client sketch honouring the first two tips follows the list).
Use GZIP requests. This is important! For example, one scraper used 120 megabytes of bandwidth in only 3,310 hits which is substantial. With basic gzip support (baked into HTTP since the 90s, and universally supported) it would have been 20 megabytes or less.
Identify yourself. Add something useful to the user-agent (ideally, a link to an URL, or something informational) so we can see your bot as something other than "generic unknown anonymous scraper."
Use the right formats. Don't scrape HTML when there is a JSON or RSS feed you could use instead. Heck, why scrape at all when you can download our cc-wiki data dump??
Be considerate. Pulling data more than every 15 minutes is questionable. If you need something more timely than that ... why not ask permission first, and make your case as to why this is a benefit to the SO community and should be allowed? Our email is linked at the bottom of every single page on every SO family site. We don't bite... hard.
Yes, you want an API. We get it. Don't rage against the machine by doing naughty things until we build it. It's in the queue.
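For instance, a minimal sketch of a client that follows the first two tips, GZIP and self-identification (the URL and user-agent string are placeholders):

using System;
using System.IO;
using System.Net;

class PoliteScraper
{
    static void Main()
    {
        var req = (HttpWebRequest)WebRequest.Create("https://example.com/feed"); // placeholder URL
        // Tip 1: accept gzip and let the framework decompress transparently.
        req.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
        // Tip 2: identify yourself, ideally with a contact URL.
        req.UserAgent = "MyAccountSyncBot/1.0 (+https://example.com/bot-info)";

        using (var resp = (HttpWebResponse)req.GetResponse())
        using (var reader = new StreamReader(resp.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd().Length); // consume the response
        }
    }
}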
We are developing a multi-tenant application in ASP.NET, with a separate database for each tenant, in which one of the requirements is to monitor the bandwidth usage for each tenant.
I have tried to search but have not found much help on the topic. We want to monitor exactly how much bandwidth is being used by each tenant, while each tenant can have its own top-level domain, a subdomain, or a combination of both.
The options I can think of are:
IIS log monitoring: a separate application which calculates the bandwidth for each tenant.
Log each request and response for a tenant from within the application, and then calculate the total bandwidth usage based on that.
Use some third-party component, if available.
So what do you think would be the best approach? And is there any other way to do this?
OK, here is an idea (which I have not tested; I'll leave that to you).
In Global.asax,
use one of these handlers (find the one that reports a valid final size):
Application_PostRequestHandlerExecute
Application_ReleaseRequestState
and get the size you have sent with
Response.Filter.Length
Needless to say, you get the path of the call using
HttpContext.Current.Request.Path
These handlers are called on every single request, so you can capture the size there and do the rest.
Note that you will need to test this idea first to see if it works, and maybe improve it. Also keep in mind that if you compress pages on the server, the reported length will not be correct, and you may need to do the compression yourself in Global.asax to get the actual length.
Hope this helps.
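Following that idea, a minimal untested sketch (the handler auto-wires by name in Global.asax; the per-host dictionary and the fallback note are my additions):

using System;
using System.Collections.Concurrent;
using System.Web;

public class Global : HttpApplication
{
    private static readonly ConcurrentDictionary<string, long> BytesPerHost =
        new ConcurrentDictionary<string, long>();

    protected void Application_ReleaseRequestState(object sender, EventArgs e)
    {
        try
        {
            // Tenant identified by (sub)domain, per the question's setup.
            long size = Response.Filter != null ? Response.Filter.Length : 0;
            string tenant = Request.Url.Host;
            BytesPerHost.AddOrUpdate(tenant, size, (host, total) => total + size);
        }
        catch (NotSupportedException)
        {
            // Some filter streams cannot report a length. The more robust
            // variant is to install your own Stream wrapper on Response.Filter
            // that counts the bytes passing through its Write().
        }
    }
}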
Well, since the IIS logs already contain the request size and response size, it doesn't seem like too much trouble to develop a small tool that parses them and calculates the totals per day/week/month/whatever.
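A sketch of such a tool, assuming the cs-host, sc-bytes and cs-bytes fields are enabled in the IIS logging configuration (not all of them are on by default; the log path is a placeholder):

using System;
using System.Collections.Generic;
using System.IO;

class BandwidthTotals
{
    static void Main()
    {
        var totals = new Dictionary<string, long>();
        string[] fields = null;

        foreach (var line in File.ReadLines(@"C:\inetpub\logs\LogFiles\W3SVC1\u_ex121029.log"))
        {
            if (line.StartsWith("#Fields:"))
            { fields = line.Substring(8).Trim().Split(' '); continue; }
            if (fields == null || line.StartsWith("#")) continue;

            var cols = line.Split(' ');
            int host = Array.IndexOf(fields, "cs-host");   // tenant's (sub)domain
            int sent = Array.IndexOf(fields, "sc-bytes");  // response size
            int recv = Array.IndexOf(fields, "cs-bytes");  // request size
            if (host < 0 || sent < 0 || recv < 0) continue;

            long bytes;
            if (!totals.ContainsKey(cols[host])) totals[cols[host]] = 0;
            if (long.TryParse(cols[sent], out bytes)) totals[cols[host]] += bytes;
            if (long.TryParse(cols[recv], out bytes)) totals[cols[host]] += bytes;
        }

        foreach (var kv in totals)
            Console.WriteLine("{0}: {1} bytes", kv.Key, kv.Value);
    }
}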
Trying to segment traffic based on host is difficult, in my experience. Instead, if you give each tenant their own IP(s) for the application, you should be able to find programs that monitor bandwidth based on IP.
ADDITION Is the structure of IIS such that you have one website to rule them all for all tenants, with the system forking to the proper database on login? If so, this may create problems with respect to versioning, in that all tenants' sites will have to have exactly the same schema and would all need to be updated simultaneously whenever an application update requires a schema change.
Another structure, which sounds like what you may have, is that each tenant has their own website, like so:
tenant1_site/appvirtualdir
tenant2_site/appvirtualdir
...
where appvirtualdir points to the same physical path for all tenants' sites. When all clients are on the same application version, they are all literally running the same code. If you have this scenario and some sort of authentication, then you will need one IP per tenant anyway because of SSL: SSL binds only to IP and port, unlike non-SSL, which binds to IP, port and host. In that case, monitoring traffic based on IP will still be simpler and more accurate, as it can be done at the router or via a network monitor.