I've been looking around but I couldn't find anything useful. What would be the best practice for securing a Symfony app against brute-force attacks? I looked into the SecurityBundle but couldn't find anything.
Something I do for this is keep a log, using event subscribers, of the IP addresses and/or usernames attempting to log in. Then, if an IP/user has accumulated a certain number of failed attempts within a given window, I move that IP address/user to a ban list, and from then on any login attempt from that IP/user is denied right away based on that ban list.
You can also play with the time between attempts and all those goodies inside the event subscriber.
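For what it's worth, here is a rough sketch of that idea using Symfony's newer authenticator events (5.3+, PHP 8). The class name, thresholds, and the injected PSR-6 cache pool are placeholders I made up, not part of any existing bundle:

use Psr\Cache\CacheItemPoolInterface;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\Security\Http\Event\LoginFailureEvent;

class LoginFailureBanSubscriber implements EventSubscriberInterface
{
    public function __construct(
        private CacheItemPoolInterface $cache, // e.g. a Redis/Memcached-backed pool
        private int $maxFailures = 5,
        private int $banSeconds = 900,
    ) {
    }

    public static function getSubscribedEvents(): array
    {
        return [LoginFailureEvent::class => 'onLoginFailure'];
    }

    public function onLoginFailure(LoginFailureEvent $event): void
    {
        $ip = $event->getRequest()->getClientIp() ?? 'unknown';

        // Count failures per IP within the window.
        $item = $this->cache->getItem('login_failures_' . md5($ip));
        $failures = ($item->get() ?? 0) + 1;
        $item->set($failures)->expiresAfter($this->banSeconds);
        $this->cache->save($item);

        // Once the threshold is reached, flag the IP; an authenticator or request
        // listener can then check this flag and reject the login attempt outright.
        if ($failures >= $this->maxFailures) {
            $ban = $this->cache->getItem('login_ban_' . md5($ip));
            $ban->set(true)->expiresAfter($this->banSeconds);
            $this->cache->save($ban);
        }
    }
}

Also worth noting: newer Symfony versions ship a built-in login_throttling firewall option in security.yaml that covers the basic case without any custom code.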
Let me know if it makes sense.
Use Cloudflare for DDoS attacks. However, it may be expensive.
You can prevent dictionary attacks using https://github.com/codeconsortium/CCDNUserSecurityBundle
Honestly, I do that at the web/cache server level when I need to. I recently used Varnish Cache to do it with a module called vsthrottle (which is probably one of many things you can use at the server level). The advantage of doing it at the web server level instead of in Symfony is that you never even hit the PHP layer and load all the vendors just to end up rejecting a request, and you don't need a separate data store (be it MySQL or something fast like Memcached) to log every request and compare it on the next one. If the request reaches the PHP layer, it has already cost you some performance, and a DDoS of that type will still hurt you even if you return a rejection from Symfony, because it forces the server to run PHP and part of the Symfony code.
If you insist on doing it in Symfony, register a listener that listens to all requests, parse the request headers for either the IP address or X-Forwarded-For (in case you are behind a load balancer, in which case only the load balancer's IP would show up with a regular IP check), and then find a suitable way to keep track of all requests up to a minute old (you could probably use Memcached for fast storage, with a smart way to increment counts for each IP). If an IP hits you more than, say, 100 times in the last minute, you return a Forbidden or Too Many Requests response instead of the usual one. But I do not recommend this, as ready-built solutions (like the Varnish module I used) are usually better; in my case I could throttle specific routes and not others, for example.
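If you do go down that road anyway, a very rough sketch of such a listener might look like the following. The 100-per-minute limit, the priority, and the injected PSR-6 cache pool are arbitrary choices of mine; note that Symfony's getClientIp() already takes X-Forwarded-For into account once trusted proxies are configured:

use Psr\Cache\CacheItemPoolInterface;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\HttpKernel\Event\RequestEvent;
use Symfony\Component\HttpKernel\KernelEvents;

class RequestThrottleSubscriber implements EventSubscriberInterface
{
    public function __construct(
        private CacheItemPoolInterface $cache, // ideally Memcached/Redis-backed
        private int $limitPerMinute = 100,
    ) {
    }

    public static function getSubscribedEvents(): array
    {
        // High priority so we reject before the controller is even resolved.
        return [KernelEvents::REQUEST => ['onKernelRequest', 255]];
    }

    public function onKernelRequest(RequestEvent $event): void
    {
        if (!$event->isMainRequest()) {
            return;
        }

        $ip = $event->getRequest()->getClientIp() ?? 'unknown';
        // One counter per IP per minute; the key expires on its own.
        $key = 'throttle_' . md5($ip) . '_' . date('YmdHi');

        $item = $this->cache->getItem($key);
        $count = ($item->get() ?? 0) + 1;
        $item->set($count)->expiresAfter(60);
        $this->cache->save($item);

        if ($count > $this->limitPerMinute) {
            $event->setResponse(new Response('Too Many Requests', 429));
        }
    }
}

As said above, though, this still costs you the PHP/Symfony bootstrap on every request, which is exactly why doing it in Varnish or the web server is usually the better call.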
I am looking for the most reliable, accurate, and quick means possible of adding some .htaccess code to block visits to a website from countries/IPs that are not in my whitelist of allowed countries. I have looked at https://www.ip2location.com/free/visitor-blocker which seems to offer a solution; for the 4 countries I want to allow access from, it has created a 4.1 MB .htaccess file! Will this mean slow access when someone attempts to view the site? I guess using a free service like this means the data is likely nowhere near comprehensive?
Does anyone have any suggestions on a good way to allow just visitors from a few countries access to a website?
It sounds like the service you used basically tried to brute-force the blacklist. If you look into the .htaccess file, I'm sure you will see a long list of hard-coded IP blocks.
In my opinion this is a terrible way to handle a geographic blacklist. To your original question - there is no "most reliable, most accurate, and quickest" method. Those are separate categories and you will need to prioritize one over the others.
For performance you could consider blacklisting at the routing level / DNS server / proxy, though this obviously isn't the quickest approach to set up. There are Apache modules that allow you to use a local database to compare the incoming IP address against a list of known IP blocks for the blacklisted countries. One of the main issues with this is that you need to constantly update your database to take in new IP blocks.
In my opinion the "best" method is a simple redirect at the application layer using server-side code. There exist several geolocation APIs to which you can send an IP or hostname and get back a country of origin. An example:
// Look up the visitor's country via a geolocation API and redirect based on it.
$xml = new SimpleXMLElement(file_get_contents('http://www.freegeoip.net/xml/{IP_or_hostname}'));
if ((string) $xml->CountryCode === 'US') {
    header('Location: http://www.google.com');
    exit; // stop execution once the redirect header has been sent
}
There are two ways to block a visitor on a web server. One is using the firewall (.htaccess, etc.) and the other is using server-side scripting (PHP, etc.).
If you are concerned about the performance of the firewall option, you can download the IP2Location LITE database from http://lite.ip2location.com and host the database on your local server. For every connection, you query the visitor's IP address and find their country. You can then redirect or block them using PHP code. Please find the complete steps at https://www.ip2location.com/tutorials/redirect-web-visitors-by-country-using-php-and-mysql-database
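If you go the local-database route, a minimal sketch of the lookup might look like this. It assumes you imported the IP2Location LITE DB1 CSV into a MySQL table named ip2location_db1 with ip_from, ip_to, and country_code columns, as in the linked tutorial; verify the names against your own import, and the credentials and country list are placeholders:

// Decide whether the visitor's country is on the whitelist.
$allowedCountries = ['US', 'CA', 'GB', 'AU']; // your 4 allowed countries

// IPv4 address converted to the integer form used by the LITE database.
$ipNumber = sprintf('%u', ip2long($_SERVER['REMOTE_ADDR']));

$pdo = new PDO('mysql:host=localhost;dbname=geo;charset=utf8mb4', 'dbuser', 'dbpass');
$stmt = $pdo->prepare(
    'SELECT country_code FROM ip2location_db1
     WHERE :ip BETWEEN ip_from AND ip_to
     LIMIT 1'
);
$stmt->execute(['ip' => $ipNumber]);
$countryCode = $stmt->fetchColumn();

if (!in_array($countryCode, $allowedCountries, true)) {
    header('HTTP/1.1 403 Forbidden');
    exit('Access from your location is not allowed.');
}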
There is also the option of using a remote geolocation API. However, we do not suggest this method because of network latency; the API queries will slow down the experience for every user.
This is a theoretical question.
Imagine an ASP.NET website. By clicking a button, the site sends mail. Now:
I can send mail asynchronously in code
I can send mail using QueueBackgroundWorkItem
I can call a one-way web service located in the same website
I can call a one-way web service located in ANOTHER website (or another subdomain)
None of the above solutions waits for the mail operation to complete, so they are all fine in that respect.
My question is: why should I use a service-based solution instead of the others? Is there an advantage?
The 4th solution adds extra TCP/IP traffic just to use the service, so it's not efficient, right?
If so, using a service under the same website (3rd solution) also generates additional traffic. Is that correct?
I need to understand why people use services under the same website. Is there any reason besides making something available to AJAX calls?
Any information would be great. I really need to get opinions.
Best
The most appropriate architecture will depend on several factors:
the volume of emails that needs to be sent
the need to reuse the email sending capability beyond the use case described
the simplicity of implementation, deployment, and maintenance of the code
Separating out the sending of emails in a service either in the same or another web application will make it available to other applications and from client side code. It also adds some complexity to the code calling the service as it will need to deal with the case when the service is not available and handle errors that may occur when placing the call.
Using a separate web application for the service is useful if the volume of emails sent is really large, as it allows you to offload the work to one or more servers if needed. Given the use case described (a user clicks a button), this seems rather unlikely, unless the website will have really large traffic. Creating a separate web application adds significant development, deployment, and maintenance work, initially and over time.
Unless the volume of emails to be sent is really large (millions per day) or there is a need to reuse the email capability in other systems, creating the email sending function within the same web application (first two options listed in the question) is almost certainly the best way to go. It will result in the least amount of initial work, is easy to deploy, and (perhaps most importantly) will be the easiest to maintain.
An important concern to pay significant attention to when implementing an email sending function is robustness. Robustness can be achieved with any of the possible architectures and is somewhat of a different concern from the one emphasized by the question. However, it is important to consider the proper course of action if (1) the receiving SMTP server refuses to take the message (e.g., mailbox full; non-existent account; rejection as spam) and (2) an NDR is generated after the message is sent (e.g., rejection as spam). Depending on the kind of email sent, it may be OK to ignore these errors, or some corrective action may be needed (e.g., retry sending, alert the user who originated the email, ...)
I have some web services that are called by various clients, including mobile and web apps. I have no control over the clients' code.
But, I need to identify who is calling my web services, via the IP address or something else.
Is there any way to identify that?
A better approach to tracking this sort of thing is to introduce the notion of an API key. That way you know exactly who is using your service and you can track their usage etc.
On every call to your service, the user would have to provide their key as a means of authorisation (not authentication). This sort of approach can generally help avoid misuse of an API; however, it can't eradicate it completely. At least with this approach, if you do find a malicious user, it's as simple as disabling that particular API key.
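The check itself is trivial in any stack; as a language-agnostic illustration, here is a tiny PHP sketch. The header name, the hard-coded keys, and the logging are placeholders; real keys would live in a database:

// Map of issued API keys to the client they identify.
$validKeys = [
    'key-for-mobile-app' => 'Mobile client',
    'key-for-web-client' => 'Web client',
];

// Expect the key in an X-Api-Key request header.
$providedKey = $_SERVER['HTTP_X_API_KEY'] ?? '';

if (!isset($validKeys[$providedKey])) {
    http_response_code(401);
    exit('Missing or invalid API key');
}

// Record which client called which endpoint, alongside the IP, for auditing.
error_log(sprintf(
    '%s called %s from %s',
    $validKeys[$providedKey],
    $_SERVER['REQUEST_URI'] ?? '',
    $_SERVER['REMOTE_ADDR'] ?? ''
));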
You should check your IIS logs; these will list (if you have logging turned on, which it is by default) all the requests made to your server.
So search through the log for the URL of the service, check the entries around the time of the requests you are having issues with, and you will see the IP addresses listed.
Your logs can generally be found at: C:\inetpub\logs\LogFiles
If the folder is empty then you are out of luck for now; you will need to turn logging on in IIS, and after a few hours you will be able to check the logs and start seeing where requests are coming from.
E.g. a sample from a log:
2012-10-29 04:49:44 129.35.250.132 GET /favicon.ico/sign-in returnUrl=%252ffavicon.ico 82 - 27.x.x.x Mozilla/5.0+(Windows+NT+6.1;+rv:16.0)+Gecko/20100101+Firefox/16.0 200 0 0 514
The first field is the date and time, and the client IP address is the 27.x.x.x field (redacted, as it's from a real log).
I have a DB with user accounts information.
I've scheduled a cron job which updates the DB with any new user data it fetches from their accounts.
I was thinking that this may cause a problem since all requests are coming from the same IP address and the server may block requests from that IP address.
Is this the case?
If so, how do I avoid being banned? should I be using a proxy?
Thanks
You get banned for suspicious (or malicious) activity.
If you are running a normal business application inside a normal company intranet you are unlikely to get banned.
Since you have access to user accounts information, you already have a lot of access to the system. The best thing to do is to ask your systems administrator, since he/she defines what constitutes suspicious/malicious activity. The systems administrator might also want to help you ensure that your database is at least as secure as the original information.
should I be using a proxy?
A proxy might disguise what you are doing - but you are still doing it. So this isn't the most ethical way of solving the problem.
Is the cron job that fetches data from this "database" on the same server? Are you fetching data for a user from a remote server using screen scraping or something?
If this is the case, you may want to set up a few different cron jobs and do it in batches. That way you reduce the load on the remote server and lower the chance of wherever you are getting this data from blocking your access.
Edit
Okay, so if you have not got permission to do scraping, you will obviously want to do it responsibly (no matter the site). Try to gather as much data as you can from as few requests as possible, and spread them out over the course of the whole day, or even during times that are likely to have low load. I wouldn't try to use a proxy; that wouldn't really help the remote server, and it would be a pain in the ass for you.
I'm no iPhone programmer, and this might not be possible, but you could try having the individual iPhones grab the data so all the source traffic isn't from the same IP. Just an idea; otherwise just try to be a bit discreet.
Here are some tips from Jeff regarding scraping Stack Overflow, but I'd imagine the rules are similar for any site.
Use GZIP requests. This is important! For example, one scraper used 120 megabytes of bandwidth in only 3,310 hits, which is substantial. With basic gzip support (baked into HTTP since the 90s, and universally supported) it would have been 20 megabytes or less. (A small cURL sketch of this and the next tip follows this list.)
Identify yourself. Add something useful to the user-agent (ideally, a link to an URL, or something informational) so we can see your bot as something other than "generic unknown anonymous scraper."
Use the right formats. Don't scrape HTML when there is a JSON or RSS feed you could use instead. Heck, why scrape at all when you can download our cc-wiki data dump??
Be considerate. Pulling data more than every 15 minutes is questionable. If you need something more timely than that ... why not ask permission first, and make your case as to why this is a benefit to the SO community and should be allowed? Our email is linked at the bottom of every single page on every SO family site. We don't bite... hard.
Yes, you want an API. We get it. Don't rage against the machine by doing naughty things until we build it. It's in the queue.
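To make the first two tips concrete, here is a small cURL sketch in PHP; the URL and the bot's contact link are placeholders:

$ch = curl_init('https://example.com/feed.json');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    // An empty string tells cURL to advertise and transparently decode
    // every encoding it supports (gzip, deflate, ...).
    CURLOPT_ENCODING       => '',
    // Identify the bot and give the site owner a way to contact you.
    CURLOPT_USERAGENT      => 'MyAccountSyncBot/1.0 (+https://example.com/about-bot)',
]);
$body = curl_exec($ch);
curl_close($ch);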
Hi, we are developing a multi-tenant application in ASP.NET with a separate database for each tenant. One of the requirements is to monitor the bandwidth usage for each tenant.
I have tried to search but have not found much help on the topic. We want to monitor exactly how much bandwidth is being used by each tenant, while each tenant can have its own top-level domain, a subdomain, or a combination of both.
So what are the available options? The ones I can think of are:
IIS log monitoring, i.e. a separate application that parses the logs and calculates the bandwidth for each tenant.
Log Each Request and Response for a tenant from within the application and then calculate the total bandwidth usage based on that.
Use some third-party components, if available.
So what do you think would be the best approach? Also, is there any other way to do this?
OK, here is an idea (that I have not tested; I leave that to you).
In Global.asax,
use one of these handlers (find the one that sees a valid final size):
Application_PostRequestHandlerExecute
Application_ReleaseRequestState
and get the size you have sent with
Response.Filter.Length
Needless to say, you get the file name of the call using
HttpContext.Current.Request.Path
These handlers are called on every single request, so you can get your size there and do the rest.
I must note here that you first need to test this idea to see if it works, and maybe improve it. Keep in mind that if you compress the pages on the server, the reported length will not be correct, and you may need to do the compression in Global.asax to get the actual length.
Hope this helps.
Well, since the IIS logs already contain the request size and response size, it doesn't seem like too much trouble to develop a small tool to parse them and calculate the total per day/week/month/whatever.
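As a sketch of what such a tool could look like (PHP here, but any language will do), this sums request plus response bytes per host per day from a W3C-format IIS log. It assumes the optional cs-bytes, sc-bytes, and cs-host fields are enabled in your IIS logging configuration, and the file name is just an example:

$totals = [];   // [host][date] => total bytes
$fields = [];   // column name => position, taken from the #Fields: line

foreach (file('u_ex121029.log') as $rawLine) {
    $line = trim($rawLine);

    if (strpos($line, '#Fields:') === 0) {
        $fields = array_flip(array_slice(preg_split('/\s+/', $line), 1));
        continue;
    }
    if ($line === '' || $line[0] === '#' || !$fields) {
        continue; // skip comments and anything before the field definition
    }

    $cols  = preg_split('/\s+/', $line);
    $date  = $cols[$fields['date']] ?? 'unknown';
    $host  = $cols[$fields['cs-host']] ?? 'unknown';   // tenant's domain/subdomain
    $bytes = (int) ($cols[$fields['sc-bytes']] ?? 0)   // response size
           + (int) ($cols[$fields['cs-bytes']] ?? 0);  // request size

    $totals[$host][$date] = ($totals[$host][$date] ?? 0) + $bytes;
}

print_r($totals);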
Trying to segment traffic based on host is difficult in my experience. Instead, if you give each tenant their own IP(s) for the application, you should be able to find programs that will monitor bandwidth based on IP.
ADDITION: Is the structure in IIS such that you have one website to rule them all for all tenants, and on login the system forks to the proper database? If so, this may create problems with respect to versioning, in that all tenants' sites will have to have exactly the same schema and would all need to be updated simultaneously whenever an application update requires a schema change.
Another structure, which sounds like what you may have, is that each tenant has their own website like so:
tenant1_site/appvirtualdir
tenant2_site/appvirtualdir
...
Where appvirtualdir points to the same physical path for all tenants' sites. When all clients have the same application version, they are all using literally the same code. If you have this scenario and some sort of authentication, then you will need one IP per tenant anyway because of SSL. SSL will only bind to IP and port, unlike non-SSL, which will bind to IP, port, and host. If that is the case, then monitoring traffic based on IP will still be simpler and more accurate, as it could be done at the router or via a network monitor.